00:00:00.000 Started by upstream project "autotest-per-patch" build number 126245
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.027 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.028 The recommended git tool is: git
00:00:00.028 using credential 00000000-0000-0000-0000-000000000002
00:00:00.031 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.046 Fetching changes from the remote Git repository
00:00:00.050 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.081 Using shallow fetch with depth 1
00:00:00.081 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.081 > git --version # timeout=10
00:00:00.126 > git --version # 'git version 2.39.2'
00:00:00.126 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.163 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.163 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.269 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.280 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.291 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD)
00:00:03.291 > git config core.sparsecheckout # timeout=10
00:00:03.301 > git read-tree -mu HEAD # timeout=10
00:00:03.316 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5
00:00:03.335 Commit message: "jenkins/jjb-config: Purge centos leftovers"
00:00:03.335 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10
00:00:03.415 [Pipeline] Start of Pipeline
00:00:03.430 [Pipeline] library
00:00:03.432 Loading library shm_lib@master
00:00:03.432 Library shm_lib@master is cached. Copying from home.
00:00:03.447 [Pipeline] node
00:00:03.453 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:03.457 [Pipeline] {
00:00:03.470 [Pipeline] catchError
00:00:03.471 [Pipeline] {
00:00:03.485 [Pipeline] wrap
00:00:03.495 [Pipeline] {
00:00:03.503 [Pipeline] stage
00:00:03.506 [Pipeline] { (Prologue)
00:00:03.730 [Pipeline] sh
00:00:04.008 + logger -p user.info -t JENKINS-CI
00:00:04.024 [Pipeline] echo
00:00:04.025 Node: GP11
00:00:04.030 [Pipeline] sh
00:00:04.324 [Pipeline] setCustomBuildProperty
00:00:04.334 [Pipeline] echo
00:00:04.335 Cleanup processes
00:00:04.340 [Pipeline] sh
00:00:04.619 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.619 1045660 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.630 [Pipeline] sh
00:00:04.909 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.909 ++ grep -v 'sudo pgrep'
00:00:04.909 ++ awk '{print $1}'
00:00:04.909 + sudo kill -9
00:00:04.909 + true
00:00:04.920 [Pipeline] cleanWs
00:00:04.928 [WS-CLEANUP] Deleting project workspace...
00:00:04.928 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.957 [WS-CLEANUP] done
00:00:04.961 [Pipeline] setCustomBuildProperty
00:00:04.975 [Pipeline] sh
00:00:05.252 + sudo git config --global --replace-all safe.directory '*'
00:00:05.348 [Pipeline] httpRequest
00:00:05.372 [Pipeline] echo
00:00:05.374 Sorcerer 10.211.164.101 is alive
00:00:05.379 [Pipeline] httpRequest
00:00:05.383 HttpMethod: GET
00:00:05.384 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:05.384 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:05.387 Response Code: HTTP/1.1 200 OK
00:00:05.387 Success: Status code 200 is in the accepted range: 200,404
00:00:05.387 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:06.154 [Pipeline] sh
00:00:06.440 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:06.455 [Pipeline] httpRequest
00:00:06.473 [Pipeline] echo
00:00:06.475 Sorcerer 10.211.164.101 is alive
00:00:06.482 [Pipeline] httpRequest
00:00:06.485 HttpMethod: GET
00:00:06.486 URL: http://10.211.164.101/packages/spdk_958a93494ad9e56a007efe7af17492dfed1ddd12.tar.gz
00:00:06.487 Sending request to url: http://10.211.164.101/packages/spdk_958a93494ad9e56a007efe7af17492dfed1ddd12.tar.gz
00:00:06.489 Response Code: HTTP/1.1 200 OK
00:00:06.490 Success: Status code 200 is in the accepted range: 200,404
00:00:06.490 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_958a93494ad9e56a007efe7af17492dfed1ddd12.tar.gz
00:00:28.214 [Pipeline] sh
00:00:28.497 + tar --no-same-owner -xf spdk_958a93494ad9e56a007efe7af17492dfed1ddd12.tar.gz
00:00:31.042 [Pipeline] sh
00:00:31.326 + git -C spdk log --oneline -n5
00:00:31.326 958a93494 scripts/setup.sh: Use HUGE_EVEN_ALLOC logic by default
00:00:31.326 a95bbf233 blob: set parent_id properly on spdk_bs_blob_set_external_parent.
00:00:31.326 248c547d0 nvmf/tcp: add option for selecting a sock impl
00:00:31.326 2d30d9f83 accel: introduce tasks in sequence limit
00:00:31.326 2728651ee accel: adjust task per ch define name
00:00:31.339 [Pipeline] }
00:00:31.358 [Pipeline] // stage
00:00:31.368 [Pipeline] stage
00:00:31.370 [Pipeline] { (Prepare)
00:00:31.390 [Pipeline] writeFile
00:00:31.408 [Pipeline] sh
00:00:31.686 + logger -p user.info -t JENKINS-CI
00:00:31.700 [Pipeline] sh
00:00:31.981 + logger -p user.info -t JENKINS-CI
00:00:31.995 [Pipeline] sh
00:00:32.278 + cat autorun-spdk.conf
00:00:32.278 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:32.278 SPDK_TEST_NVMF=1
00:00:32.278 SPDK_TEST_NVME_CLI=1
00:00:32.278 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:32.278 SPDK_TEST_NVMF_NICS=e810
00:00:32.278 SPDK_TEST_VFIOUSER=1
00:00:32.278 SPDK_RUN_UBSAN=1
00:00:32.278 NET_TYPE=phy
00:00:32.285 RUN_NIGHTLY=0
00:00:32.290 [Pipeline] readFile
00:00:32.348 [Pipeline] withEnv
00:00:32.350 [Pipeline] {
00:00:32.361 [Pipeline] sh
00:00:32.643 + set -ex
00:00:32.643 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:32.643 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:32.643 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:32.643 ++ SPDK_TEST_NVMF=1
00:00:32.643 ++ SPDK_TEST_NVME_CLI=1
00:00:32.643 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:32.643 ++ SPDK_TEST_NVMF_NICS=e810
00:00:32.643 ++ SPDK_TEST_VFIOUSER=1
00:00:32.643 ++ SPDK_RUN_UBSAN=1
00:00:32.643 ++ NET_TYPE=phy
00:00:32.643 ++ RUN_NIGHTLY=0
00:00:32.643 + case $SPDK_TEST_NVMF_NICS in
00:00:32.643 + DRIVERS=ice
00:00:32.643 + [[ tcp == \r\d\m\a ]]
00:00:32.643 + [[ -n ice ]]
00:00:32.643 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:32.643 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:32.643 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:32.643 rmmod: ERROR: Module irdma is not currently loaded
00:00:32.643 rmmod: ERROR: Module i40iw is not currently loaded
00:00:32.643 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:32.643 + true
00:00:32.643 + for D in $DRIVERS
00:00:32.643 + sudo modprobe ice
00:00:32.643 + exit 0
00:00:32.651 [Pipeline] }
00:00:32.669 [Pipeline] // withEnv
00:00:32.675 [Pipeline] }
00:00:32.693 [Pipeline] // stage
00:00:32.704 [Pipeline] catchError
00:00:32.706 [Pipeline] {
00:00:32.720 [Pipeline] timeout
00:00:32.720 Timeout set to expire in 50 min
00:00:32.722 [Pipeline] {
00:00:32.734 [Pipeline] stage
00:00:32.735 [Pipeline] { (Tests)
00:00:32.750 [Pipeline] sh
00:00:33.028 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:33.028 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:33.028 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:33.028 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:33.028 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:33.028 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:33.028 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:33.028 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:33.028 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:33.028 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:33.028 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:00:33.028 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:33.028 + source /etc/os-release
00:00:33.028 ++ NAME='Fedora Linux'
00:00:33.028 ++ VERSION='38 (Cloud Edition)'
00:00:33.028 ++ ID=fedora
00:00:33.028 ++ VERSION_ID=38
00:00:33.028 ++ VERSION_CODENAME=
00:00:33.028 ++ PLATFORM_ID=platform:f38
00:00:33.028 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:33.028 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:33.028 ++ LOGO=fedora-logo-icon
00:00:33.028 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:33.028 ++ HOME_URL=https://fedoraproject.org/
00:00:33.028 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:33.028 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:33.028 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:33.028 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:33.028 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:33.028 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:33.028 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:33.028 ++ SUPPORT_END=2024-05-14
00:00:33.028 ++ VARIANT='Cloud Edition'
00:00:33.028 ++ VARIANT_ID=cloud
00:00:33.028 + uname -a
00:00:33.028 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:33.028 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:33.965 Hugepages
00:00:33.965 node hugesize free / total
00:00:33.965 node0 1048576kB 0 / 0
00:00:33.965 node0 2048kB 0 / 0
00:00:33.965 node1 1048576kB 0 / 0
00:00:33.965 node1 2048kB 0 / 0
00:00:33.965
00:00:33.965 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:33.965 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:00:33.965 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:00:33.965 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:00:33.965 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:00:33.965 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:00:33.965 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:00:33.965 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:00:33.965 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:00:33.965 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:00:33.965 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:00:33.965 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:00:33.965 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:00:33.965 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:00:33.965 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:00:33.965 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:00:33.965 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:00:33.965 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:00:33.965 + rm -f /tmp/spdk-ld-path
00:00:33.965 + source autorun-spdk.conf
00:00:33.965 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:33.965 ++ SPDK_TEST_NVMF=1
00:00:33.965 ++ SPDK_TEST_NVME_CLI=1
00:00:33.965 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:33.965 ++ SPDK_TEST_NVMF_NICS=e810
00:00:33.965 ++ SPDK_TEST_VFIOUSER=1
00:00:33.965 ++ SPDK_RUN_UBSAN=1
00:00:33.965 ++ NET_TYPE=phy
00:00:33.965 ++ RUN_NIGHTLY=0
00:00:33.965 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:33.965 + [[ -n '' ]]
00:00:33.965 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:33.965 + for M in /var/spdk/build-*-manifest.txt
00:00:33.965 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:33.965 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:33.965 + for M in /var/spdk/build-*-manifest.txt
00:00:33.965 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:33.965 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:33.965 ++ uname
00:00:33.965 + [[ Linux == \L\i\n\u\x ]]
00:00:33.965 + sudo dmesg -T
00:00:33.965 + sudo dmesg --clear
00:00:34.229 + dmesg_pid=1046954
00:00:34.229 + [[ Fedora Linux == FreeBSD ]]
00:00:34.229 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:34.229 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:34.229 + sudo dmesg -Tw
00:00:34.229 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:34.229 + [[ -x /usr/src/fio-static/fio ]]
00:00:34.229 + export FIO_BIN=/usr/src/fio-static/fio
00:00:34.229 + FIO_BIN=/usr/src/fio-static/fio
00:00:34.229 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:34.229 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:34.229 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:34.229 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:34.229 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:34.229 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:34.229 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:34.229 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:34.229 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:34.229 Test configuration:
00:00:34.229 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:34.229 SPDK_TEST_NVMF=1
00:00:34.229 SPDK_TEST_NVME_CLI=1
00:00:34.229 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:34.229 SPDK_TEST_NVMF_NICS=e810
00:00:34.229 SPDK_TEST_VFIOUSER=1
00:00:34.229 SPDK_RUN_UBSAN=1
00:00:34.229 NET_TYPE=phy
00:00:34.229 RUN_NIGHTLY=0
00:00:34.229 22:24:17 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:00:34.229 22:24:17 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:34.229 22:24:17 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:34.229 22:24:17 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:34.229 22:24:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:34.229 22:24:17 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:34.229 22:24:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:34.229 22:24:17 -- paths/export.sh@5 -- $ export PATH
00:00:34.229 22:24:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:34.229 22:24:17 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:00:34.229 22:24:17 -- common/autobuild_common.sh@444 -- $ date +%s
00:00:34.229 22:24:17 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721075057.XXXXXX
00:00:34.229 22:24:17 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721075057.BTPOun
00:00:34.229 22:24:17 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:00:34.229 22:24:17 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:00:34.229 22:24:17 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:00:34.229 22:24:17 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:34.229 22:24:17 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:34.229 22:24:17 -- common/autobuild_common.sh@460 -- $ get_config_params
00:00:34.229 22:24:17 -- common/autotest_common.sh@390 -- $ xtrace_disable
00:00:34.229 22:24:17 -- common/autotest_common.sh@10 -- $ set +x
00:00:34.229 22:24:17 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:00:34.229 22:24:17 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:00:34.229 22:24:17 -- pm/common@17 -- $ local monitor
00:00:34.229 22:24:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:34.229 22:24:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:34.229 22:24:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:34.229 22:24:17 -- pm/common@21 -- $ date +%s
00:00:34.229 22:24:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:34.229 22:24:17 -- pm/common@21 -- $ date +%s
00:00:34.229 22:24:17 -- pm/common@25 -- $ sleep 1
00:00:34.229 22:24:17 -- pm/common@21 -- $ date +%s
00:00:34.229 22:24:17 -- pm/common@21 -- $ date +%s
00:00:34.229 22:24:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721075057
00:00:34.229 22:24:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721075057
00:00:34.229 22:24:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721075057
00:00:34.229 22:24:17 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721075057
00:00:34.229 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721075057_collect-vmstat.pm.log
00:00:34.230 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721075057_collect-cpu-load.pm.log
00:00:34.230 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721075057_collect-cpu-temp.pm.log
00:00:34.230 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721075057_collect-bmc-pm.bmc.pm.log
00:00:35.166 22:24:18 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:00:35.166 22:24:18 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:35.166 22:24:18 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:35.166 22:24:18 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:35.166 22:24:18 -- spdk/autobuild.sh@16 -- $ date -u
00:00:35.166 Mon Jul 15 08:24:18 PM UTC 2024
00:00:35.166 22:24:18 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:35.166 v24.09-pre-210-g958a93494
00:00:35.166 22:24:18 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:35.166 22:24:18 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:35.166 22:24:18 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:35.166 22:24:18 -- common/autotest_common.sh@1093 -- $ '[' 3 -le 1 ']'
00:00:35.166 22:24:18 -- common/autotest_common.sh@1099 -- $ xtrace_disable
00:00:35.166 22:24:18 -- common/autotest_common.sh@10 -- $ set +x
00:00:35.166 ************************************
00:00:35.166 START TEST ubsan
00:00:35.166 ************************************
00:00:35.166 22:24:18 ubsan -- common/autotest_common.sh@1117 -- $ echo 'using ubsan'
00:00:35.166 using ubsan
00:00:35.166
00:00:35.166 real 0m0.000s
00:00:35.166 user 0m0.000s
00:00:35.166 sys 0m0.000s
00:00:35.166 22:24:18 ubsan -- common/autotest_common.sh@1118 -- $ xtrace_disable
00:00:35.166 22:24:18 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:35.166 ************************************
00:00:35.166 END TEST ubsan
00:00:35.166 ************************************
00:00:35.166 22:24:18 -- common/autotest_common.sh@1136 -- $ return 0
00:00:35.166 22:24:18 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:35.166 22:24:18 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:35.166 22:24:18 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:35.166 22:24:18 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:35.166 22:24:18 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:35.166 22:24:18 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:35.166 22:24:18 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:35.166 22:24:18 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:35.166 22:24:18 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:00:35.424 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:00:35.424 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:00:35.683 Using 'verbs' RDMA provider
00:00:46.224 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:00:56.207 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:00:56.207 Creating mk/config.mk...done.
00:00:56.207 Creating mk/cc.flags.mk...done.
00:00:56.207 Type 'make' to build.
00:00:56.207 22:24:38 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
00:00:56.207 22:24:38 -- common/autotest_common.sh@1093 -- $ '[' 3 -le 1 ']'
00:00:56.207 22:24:38 -- common/autotest_common.sh@1099 -- $ xtrace_disable
00:00:56.207 22:24:38 -- common/autotest_common.sh@10 -- $ set +x
00:00:56.207 ************************************
00:00:56.207 START TEST make
00:00:56.207 ************************************
00:00:56.207 22:24:38 make -- common/autotest_common.sh@1117 -- $ make -j48
00:00:56.207 make[1]: Nothing to be done for 'all'.
00:00:57.151 The Meson build system
00:00:57.151 Version: 1.3.1
00:00:57.151 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:00:57.151 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:00:57.151 Build type: native build
00:00:57.151 Project name: libvfio-user
00:00:57.151 Project version: 0.0.1
00:00:57.151 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:00:57.151 C linker for the host machine: cc ld.bfd 2.39-16
00:00:57.151 Host machine cpu family: x86_64
00:00:57.151 Host machine cpu: x86_64
00:00:57.151 Run-time dependency threads found: YES
00:00:57.151 Library dl found: YES
00:00:57.151 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:00:57.151 Run-time dependency json-c found: YES 0.17
00:00:57.151 Run-time dependency cmocka found: YES 1.1.7
00:00:57.151 Program pytest-3 found: NO
00:00:57.151 Program flake8 found: NO
00:00:57.151 Program misspell-fixer found: NO
00:00:57.151 Program restructuredtext-lint found: NO
00:00:57.151 Program valgrind found: YES (/usr/bin/valgrind)
00:00:57.151 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:00:57.151 Compiler for C supports arguments -Wmissing-declarations: YES
00:00:57.151 Compiler for C supports arguments -Wwrite-strings: YES
00:00:57.151 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:00:57.151 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:00:57.151 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:00:57.151 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:00:57.151 Build targets in project: 8
00:00:57.151 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:00:57.151 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:00:57.151
00:00:57.151 libvfio-user 0.0.1
00:00:57.151
00:00:57.151 User defined options
00:00:57.151 buildtype : debug
00:00:57.151 default_library: shared
00:00:57.151 libdir : /usr/local/lib
00:00:57.151
00:00:57.151 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:00:58.096 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:00:58.096 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:00:58.096 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:00:58.096 [3/37] Compiling C object samples/lspci.p/lspci.c.o
00:00:58.096 [4/37] Compiling C object samples/null.p/null.c.o
00:00:58.096 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:00:58.096 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:00:58.096 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:00:58.096 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:00:58.096 [9/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:00:58.096 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:00:58.096 [11/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:00:58.096 [12/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:00:58.096 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:00:58.096 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:00:58.096 [15/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:00:58.096 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:00:58.096 [17/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:00:58.096 [18/37] Compiling C object test/unit_tests.p/mocks.c.o
00:00:58.358 [19/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:00:58.358 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:00:58.358 [21/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:00:58.358 [22/37] Compiling C object samples/server.p/server.c.o
00:00:58.358 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:00:58.358 [24/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:00:58.358 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:00:58.358 [26/37] Compiling C object samples/client.p/client.c.o
00:00:58.358 [27/37] Linking target samples/client
00:00:58.358 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:00:58.358 [29/37] Linking target lib/libvfio-user.so.0.0.1
00:00:58.622 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:00:58.622 [31/37] Linking target test/unit_tests
00:00:58.622 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:00:58.622 [33/37] Linking target samples/null
00:00:58.885 [34/37] Linking target samples/gpio-pci-idio-16
00:00:58.885 [35/37] Linking target samples/shadow_ioeventfd_server
00:00:58.885 [36/37] Linking target samples/lspci
00:00:58.885 [37/37] Linking target samples/server
00:00:58.885 INFO: autodetecting backend as ninja
00:00:58.885 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:00:58.885 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:00:59.464 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:00:59.464 ninja: no work to do.
00:01:04.738 The Meson build system
00:01:04.738 Version: 1.3.1
00:01:04.738 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:04.738 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:04.738 Build type: native build
00:01:04.738 Program cat found: YES (/usr/bin/cat)
00:01:04.738 Project name: DPDK
00:01:04.738 Project version: 24.03.0
00:01:04.738 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:04.738 C linker for the host machine: cc ld.bfd 2.39-16
00:01:04.738 Host machine cpu family: x86_64
00:01:04.738 Host machine cpu: x86_64
00:01:04.738 Message: ## Building in Developer Mode ##
00:01:04.738 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:04.738 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:04.738 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:04.738 Program python3 found: YES (/usr/bin/python3)
00:01:04.738 Program cat found: YES (/usr/bin/cat)
00:01:04.738 Compiler for C supports arguments -march=native: YES
00:01:04.738 Checking for size of "void *" : 8
00:01:04.738 Checking for size of "void *" : 8 (cached)
00:01:04.738 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:04.738 Library m found: YES
00:01:04.738 Library numa found: YES
00:01:04.738 Has header "numaif.h" : YES
00:01:04.738 Library fdt found: NO
00:01:04.738 Library execinfo found: NO
00:01:04.738 Has header "execinfo.h" : YES
00:01:04.738 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:04.738 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:04.738 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:04.738 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:04.738 Run-time dependency openssl found: YES 3.0.9
00:01:04.738 Run-time dependency libpcap found: YES 1.10.4
00:01:04.738 Has header "pcap.h" with dependency libpcap: YES
00:01:04.738 Compiler for C supports arguments -Wcast-qual: YES
00:01:04.738 Compiler for C supports arguments -Wdeprecated: YES
00:01:04.738 Compiler for C supports arguments -Wformat: YES
00:01:04.738 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:04.738 Compiler for C supports arguments -Wformat-security: NO
00:01:04.738 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:04.738 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:04.738 Compiler for C supports arguments -Wnested-externs: YES
00:01:04.738 Compiler for C supports arguments -Wold-style-definition: YES
00:01:04.738 Compiler for C supports arguments -Wpointer-arith: YES
00:01:04.738 Compiler for C supports arguments -Wsign-compare: YES
00:01:04.738 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:04.738 Compiler for C supports arguments -Wundef: YES
00:01:04.738 Compiler for C supports arguments -Wwrite-strings: YES
00:01:04.738 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:04.738 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:04.738 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:04.738 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:04.738 Program objdump found: YES (/usr/bin/objdump)
00:01:04.738 Compiler for C supports arguments -mavx512f: YES
00:01:04.738 Checking if "AVX512 checking" compiles: YES
00:01:04.738 Fetching value of define "__SSE4_2__" : 1
00:01:04.738 Fetching value of define "__AES__" : 1
00:01:04.738 Fetching value of define "__AVX__" : 1
00:01:04.738 Fetching value of define "__AVX2__" : (undefined)
00:01:04.738 Fetching value of define "__AVX512BW__" : (undefined)
00:01:04.738 Fetching value of define "__AVX512CD__" : (undefined)
00:01:04.738 Fetching value of define "__AVX512DQ__" : (undefined)
00:01:04.738 Fetching value of define "__AVX512F__" : (undefined)
00:01:04.738 Fetching value of define "__AVX512VL__" : (undefined)
00:01:04.738 Fetching value of define "__PCLMUL__" : 1
00:01:04.738 Fetching value of define "__RDRND__" : 1
00:01:04.738 Fetching value of define "__RDSEED__" : (undefined)
00:01:04.738 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:04.738 Fetching value of define "__znver1__" : (undefined)
00:01:04.738 Fetching value of define "__znver2__" : (undefined)
00:01:04.738 Fetching value of define "__znver3__" : (undefined)
00:01:04.738 Fetching value of define "__znver4__" : (undefined)
00:01:04.738 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:04.738 Message: lib/log: Defining dependency "log"
00:01:04.738 Message: lib/kvargs: Defining dependency "kvargs"
00:01:04.738 Message: lib/telemetry: Defining dependency "telemetry"
00:01:04.738 Checking for function "getentropy" : NO
00:01:04.738 Message: lib/eal: Defining dependency "eal"
00:01:04.738 Message: lib/ring: Defining dependency "ring"
00:01:04.738 Message: lib/rcu: Defining dependency "rcu"
00:01:04.738 Message: lib/mempool: Defining dependency "mempool"
00:01:04.738 Message: lib/mbuf: Defining dependency "mbuf"
00:01:04.738 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:04.738 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:04.738 Compiler for C supports arguments -mpclmul: YES
00:01:04.738 Compiler for C supports arguments -maes: YES
00:01:04.738 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:04.738 Compiler for C supports arguments -mavx512bw: YES
00:01:04.738 Compiler for C supports arguments -mavx512dq: YES
00:01:04.738 Compiler for C supports arguments -mavx512vl: YES
00:01:04.738 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:04.738 Compiler for C supports arguments -mavx2: YES
00:01:04.738 Compiler for C supports arguments -mavx: YES
00:01:04.738 Message: lib/net: Defining dependency "net"
00:01:04.738 Message: lib/meter: Defining dependency "meter"
00:01:04.738 Message: lib/ethdev: Defining dependency "ethdev"
00:01:04.738 Message: lib/pci: Defining dependency "pci"
00:01:04.738 Message: lib/cmdline: Defining dependency "cmdline"
00:01:04.738 Message: lib/hash: Defining dependency "hash"
00:01:04.738 Message: lib/timer: Defining dependency "timer"
00:01:04.738 Message: lib/compressdev: Defining dependency "compressdev"
00:01:04.738 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:04.738 Message: lib/dmadev: Defining dependency "dmadev"
00:01:04.738 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:04.738 Message: lib/power: Defining dependency "power"
00:01:04.738 Message: lib/reorder: Defining dependency "reorder"
00:01:04.738 Message: lib/security: Defining dependency "security"
00:01:04.738 Has header "linux/userfaultfd.h" : YES
00:01:04.738 Has header "linux/vduse.h" : YES
00:01:04.738 Message: lib/vhost: Defining dependency "vhost"
00:01:04.738 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:04.738 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:04.738 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:04.738 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:04.738 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:04.738 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:04.738 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:04.738 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:04.738 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:04.739 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:04.739 Program doxygen found: YES (/usr/bin/doxygen)
00:01:04.739 Configuring doxy-api-html.conf using configuration
00:01:04.739 Configuring doxy-api-man.conf using configuration
Program mandb found: YES (/usr/bin/mandb) 00:01:04.739 Program sphinx-build found: NO 00:01:04.739 Configuring rte_build_config.h using configuration 00:01:04.739 Message: 00:01:04.739 ================= 00:01:04.739 Applications Enabled 00:01:04.739 ================= 00:01:04.739 00:01:04.739 apps: 00:01:04.739 00:01:04.739 00:01:04.739 Message: 00:01:04.739 ================= 00:01:04.739 Libraries Enabled 00:01:04.739 ================= 00:01:04.739 00:01:04.739 libs: 00:01:04.739 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:04.739 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:04.739 cryptodev, dmadev, power, reorder, security, vhost, 00:01:04.739 00:01:04.739 Message: 00:01:04.739 =============== 00:01:04.739 Drivers Enabled 00:01:04.739 =============== 00:01:04.739 00:01:04.739 common: 00:01:04.739 00:01:04.739 bus: 00:01:04.739 pci, vdev, 00:01:04.739 mempool: 00:01:04.739 ring, 00:01:04.739 dma: 00:01:04.739 00:01:04.739 net: 00:01:04.739 00:01:04.739 crypto: 00:01:04.739 00:01:04.739 compress: 00:01:04.739 00:01:04.739 vdpa: 00:01:04.739 00:01:04.739 00:01:04.739 Message: 00:01:04.739 ================= 00:01:04.739 Content Skipped 00:01:04.739 ================= 00:01:04.739 00:01:04.739 apps: 00:01:04.739 dumpcap: explicitly disabled via build config 00:01:04.739 graph: explicitly disabled via build config 00:01:04.739 pdump: explicitly disabled via build config 00:01:04.739 proc-info: explicitly disabled via build config 00:01:04.739 test-acl: explicitly disabled via build config 00:01:04.739 test-bbdev: explicitly disabled via build config 00:01:04.739 test-cmdline: explicitly disabled via build config 00:01:04.739 test-compress-perf: explicitly disabled via build config 00:01:04.739 test-crypto-perf: explicitly disabled via build config 00:01:04.739 test-dma-perf: explicitly disabled via build config 00:01:04.739 test-eventdev: explicitly disabled via build config 00:01:04.739 test-fib: explicitly disabled via build 
config 00:01:04.739 test-flow-perf: explicitly disabled via build config 00:01:04.739 test-gpudev: explicitly disabled via build config 00:01:04.739 test-mldev: explicitly disabled via build config 00:01:04.739 test-pipeline: explicitly disabled via build config 00:01:04.739 test-pmd: explicitly disabled via build config 00:01:04.739 test-regex: explicitly disabled via build config 00:01:04.739 test-sad: explicitly disabled via build config 00:01:04.739 test-security-perf: explicitly disabled via build config 00:01:04.739 00:01:04.739 libs: 00:01:04.739 argparse: explicitly disabled via build config 00:01:04.739 metrics: explicitly disabled via build config 00:01:04.739 acl: explicitly disabled via build config 00:01:04.739 bbdev: explicitly disabled via build config 00:01:04.739 bitratestats: explicitly disabled via build config 00:01:04.739 bpf: explicitly disabled via build config 00:01:04.739 cfgfile: explicitly disabled via build config 00:01:04.739 distributor: explicitly disabled via build config 00:01:04.739 efd: explicitly disabled via build config 00:01:04.739 eventdev: explicitly disabled via build config 00:01:04.739 dispatcher: explicitly disabled via build config 00:01:04.739 gpudev: explicitly disabled via build config 00:01:04.739 gro: explicitly disabled via build config 00:01:04.739 gso: explicitly disabled via build config 00:01:04.739 ip_frag: explicitly disabled via build config 00:01:04.739 jobstats: explicitly disabled via build config 00:01:04.739 latencystats: explicitly disabled via build config 00:01:04.739 lpm: explicitly disabled via build config 00:01:04.739 member: explicitly disabled via build config 00:01:04.739 pcapng: explicitly disabled via build config 00:01:04.739 rawdev: explicitly disabled via build config 00:01:04.739 regexdev: explicitly disabled via build config 00:01:04.739 mldev: explicitly disabled via build config 00:01:04.739 rib: explicitly disabled via build config 00:01:04.739 sched: explicitly disabled via build 
config 00:01:04.739 stack: explicitly disabled via build config 00:01:04.739 ipsec: explicitly disabled via build config 00:01:04.739 pdcp: explicitly disabled via build config 00:01:04.739 fib: explicitly disabled via build config 00:01:04.739 port: explicitly disabled via build config 00:01:04.739 pdump: explicitly disabled via build config 00:01:04.739 table: explicitly disabled via build config 00:01:04.739 pipeline: explicitly disabled via build config 00:01:04.739 graph: explicitly disabled via build config 00:01:04.739 node: explicitly disabled via build config 00:01:04.739 00:01:04.739 drivers: 00:01:04.739 common/cpt: not in enabled drivers build config 00:01:04.739 common/dpaax: not in enabled drivers build config 00:01:04.739 common/iavf: not in enabled drivers build config 00:01:04.739 common/idpf: not in enabled drivers build config 00:01:04.739 common/ionic: not in enabled drivers build config 00:01:04.739 common/mvep: not in enabled drivers build config 00:01:04.739 common/octeontx: not in enabled drivers build config 00:01:04.739 bus/auxiliary: not in enabled drivers build config 00:01:04.739 bus/cdx: not in enabled drivers build config 00:01:04.739 bus/dpaa: not in enabled drivers build config 00:01:04.739 bus/fslmc: not in enabled drivers build config 00:01:04.739 bus/ifpga: not in enabled drivers build config 00:01:04.739 bus/platform: not in enabled drivers build config 00:01:04.739 bus/uacce: not in enabled drivers build config 00:01:04.739 bus/vmbus: not in enabled drivers build config 00:01:04.739 common/cnxk: not in enabled drivers build config 00:01:04.739 common/mlx5: not in enabled drivers build config 00:01:04.739 common/nfp: not in enabled drivers build config 00:01:04.739 common/nitrox: not in enabled drivers build config 00:01:04.739 common/qat: not in enabled drivers build config 00:01:04.739 common/sfc_efx: not in enabled drivers build config 00:01:04.739 mempool/bucket: not in enabled drivers build config 00:01:04.739 mempool/cnxk: 
not in enabled drivers build config 00:01:04.739 mempool/dpaa: not in enabled drivers build config 00:01:04.739 mempool/dpaa2: not in enabled drivers build config 00:01:04.739 mempool/octeontx: not in enabled drivers build config 00:01:04.739 mempool/stack: not in enabled drivers build config 00:01:04.739 dma/cnxk: not in enabled drivers build config 00:01:04.739 dma/dpaa: not in enabled drivers build config 00:01:04.739 dma/dpaa2: not in enabled drivers build config 00:01:04.739 dma/hisilicon: not in enabled drivers build config 00:01:04.739 dma/idxd: not in enabled drivers build config 00:01:04.739 dma/ioat: not in enabled drivers build config 00:01:04.739 dma/skeleton: not in enabled drivers build config 00:01:04.739 net/af_packet: not in enabled drivers build config 00:01:04.739 net/af_xdp: not in enabled drivers build config 00:01:04.739 net/ark: not in enabled drivers build config 00:01:04.739 net/atlantic: not in enabled drivers build config 00:01:04.739 net/avp: not in enabled drivers build config 00:01:04.739 net/axgbe: not in enabled drivers build config 00:01:04.739 net/bnx2x: not in enabled drivers build config 00:01:04.739 net/bnxt: not in enabled drivers build config 00:01:04.739 net/bonding: not in enabled drivers build config 00:01:04.739 net/cnxk: not in enabled drivers build config 00:01:04.739 net/cpfl: not in enabled drivers build config 00:01:04.739 net/cxgbe: not in enabled drivers build config 00:01:04.739 net/dpaa: not in enabled drivers build config 00:01:04.739 net/dpaa2: not in enabled drivers build config 00:01:04.739 net/e1000: not in enabled drivers build config 00:01:04.739 net/ena: not in enabled drivers build config 00:01:04.739 net/enetc: not in enabled drivers build config 00:01:04.739 net/enetfec: not in enabled drivers build config 00:01:04.739 net/enic: not in enabled drivers build config 00:01:04.739 net/failsafe: not in enabled drivers build config 00:01:04.739 net/fm10k: not in enabled drivers build config 00:01:04.739 
net/gve: not in enabled drivers build config 00:01:04.739 net/hinic: not in enabled drivers build config 00:01:04.739 net/hns3: not in enabled drivers build config 00:01:04.739 net/i40e: not in enabled drivers build config 00:01:04.739 net/iavf: not in enabled drivers build config 00:01:04.739 net/ice: not in enabled drivers build config 00:01:04.739 net/idpf: not in enabled drivers build config 00:01:04.739 net/igc: not in enabled drivers build config 00:01:04.739 net/ionic: not in enabled drivers build config 00:01:04.739 net/ipn3ke: not in enabled drivers build config 00:01:04.739 net/ixgbe: not in enabled drivers build config 00:01:04.739 net/mana: not in enabled drivers build config 00:01:04.739 net/memif: not in enabled drivers build config 00:01:04.739 net/mlx4: not in enabled drivers build config 00:01:04.739 net/mlx5: not in enabled drivers build config 00:01:04.739 net/mvneta: not in enabled drivers build config 00:01:04.739 net/mvpp2: not in enabled drivers build config 00:01:04.739 net/netvsc: not in enabled drivers build config 00:01:04.739 net/nfb: not in enabled drivers build config 00:01:04.739 net/nfp: not in enabled drivers build config 00:01:04.739 net/ngbe: not in enabled drivers build config 00:01:04.739 net/null: not in enabled drivers build config 00:01:04.739 net/octeontx: not in enabled drivers build config 00:01:04.739 net/octeon_ep: not in enabled drivers build config 00:01:04.739 net/pcap: not in enabled drivers build config 00:01:04.739 net/pfe: not in enabled drivers build config 00:01:04.739 net/qede: not in enabled drivers build config 00:01:04.739 net/ring: not in enabled drivers build config 00:01:04.739 net/sfc: not in enabled drivers build config 00:01:04.739 net/softnic: not in enabled drivers build config 00:01:04.739 net/tap: not in enabled drivers build config 00:01:04.739 net/thunderx: not in enabled drivers build config 00:01:04.739 net/txgbe: not in enabled drivers build config 00:01:04.739 net/vdev_netvsc: not in enabled 
drivers build config 00:01:04.739 net/vhost: not in enabled drivers build config 00:01:04.739 net/virtio: not in enabled drivers build config 00:01:04.739 net/vmxnet3: not in enabled drivers build config 00:01:04.740 raw/*: missing internal dependency, "rawdev" 00:01:04.740 crypto/armv8: not in enabled drivers build config 00:01:04.740 crypto/bcmfs: not in enabled drivers build config 00:01:04.740 crypto/caam_jr: not in enabled drivers build config 00:01:04.740 crypto/ccp: not in enabled drivers build config 00:01:04.740 crypto/cnxk: not in enabled drivers build config 00:01:04.740 crypto/dpaa_sec: not in enabled drivers build config 00:01:04.740 crypto/dpaa2_sec: not in enabled drivers build config 00:01:04.740 crypto/ipsec_mb: not in enabled drivers build config 00:01:04.740 crypto/mlx5: not in enabled drivers build config 00:01:04.740 crypto/mvsam: not in enabled drivers build config 00:01:04.740 crypto/nitrox: not in enabled drivers build config 00:01:04.740 crypto/null: not in enabled drivers build config 00:01:04.740 crypto/octeontx: not in enabled drivers build config 00:01:04.740 crypto/openssl: not in enabled drivers build config 00:01:04.740 crypto/scheduler: not in enabled drivers build config 00:01:04.740 crypto/uadk: not in enabled drivers build config 00:01:04.740 crypto/virtio: not in enabled drivers build config 00:01:04.740 compress/isal: not in enabled drivers build config 00:01:04.740 compress/mlx5: not in enabled drivers build config 00:01:04.740 compress/nitrox: not in enabled drivers build config 00:01:04.740 compress/octeontx: not in enabled drivers build config 00:01:04.740 compress/zlib: not in enabled drivers build config 00:01:04.740 regex/*: missing internal dependency, "regexdev" 00:01:04.740 ml/*: missing internal dependency, "mldev" 00:01:04.740 vdpa/ifc: not in enabled drivers build config 00:01:04.740 vdpa/mlx5: not in enabled drivers build config 00:01:04.740 vdpa/nfp: not in enabled drivers build config 00:01:04.740 vdpa/sfc: not 
in enabled drivers build config 00:01:04.740 event/*: missing internal dependency, "eventdev" 00:01:04.740 baseband/*: missing internal dependency, "bbdev" 00:01:04.740 gpu/*: missing internal dependency, "gpudev" 00:01:04.740 00:01:04.740 00:01:04.740 Build targets in project: 85 00:01:04.740 00:01:04.740 DPDK 24.03.0 00:01:04.740 00:01:04.740 User defined options 00:01:04.740 buildtype : debug 00:01:04.740 default_library : shared 00:01:04.740 libdir : lib 00:01:04.740 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:04.740 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:04.740 c_link_args : 00:01:04.740 cpu_instruction_set: native 00:01:04.740 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:04.740 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:04.740 enable_docs : false 00:01:04.740 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:04.740 enable_kmods : false 00:01:04.740 max_lcores : 128 00:01:04.740 tests : false 00:01:04.740 00:01:04.740 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:04.740 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:04.740 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:04.740 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:04.740 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:04.740 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 
00:01:04.740 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:04.740 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:04.740 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:04.740 [8/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:04.740 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:04.740 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:04.740 [11/268] Linking static target lib/librte_kvargs.a 00:01:04.999 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:04.999 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:04.999 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:04.999 [15/268] Linking static target lib/librte_log.a 00:01:04.999 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:05.577 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.577 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:05.577 [19/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:05.577 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:05.577 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:05.577 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:05.577 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:05.577 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:05.577 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:05.577 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:05.577 [27/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:05.577 [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:05.841 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:05.841 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:05.841 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:05.841 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:05.841 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:05.841 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:05.841 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:05.841 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:05.841 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:05.841 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:05.841 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:05.841 [40/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:05.841 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:05.841 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:05.841 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:05.841 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:05.841 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:05.841 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:05.841 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:05.841 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:05.841 [49/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:05.841 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:05.841 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:05.841 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:05.841 [53/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:05.841 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:05.841 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:05.841 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:05.841 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:05.841 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:05.841 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:05.841 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:06.099 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:06.099 [62/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:06.099 [63/268] Linking static target lib/librte_telemetry.a 00:01:06.099 [64/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.099 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:06.099 [66/268] Linking target lib/librte_log.so.24.1 00:01:06.099 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:06.360 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:06.360 [69/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:06.360 [70/268] Linking static target lib/librte_pci.a 00:01:06.360 [71/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:06.623 [72/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:06.623 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:06.623 [74/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:06.623 [75/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:06.623 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:06.623 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:06.623 [78/268] Linking target lib/librte_kvargs.so.24.1 00:01:06.623 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:06.623 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:06.623 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:06.623 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:06.623 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:06.623 [84/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:06.623 [85/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:06.623 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:06.623 [87/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:06.623 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:06.888 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:06.888 [90/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:06.888 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:06.888 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:06.888 [93/268] Linking static target lib/librte_meter.a 00:01:06.888 [94/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:06.888 [95/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:06.888 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:06.888 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:06.888 [98/268] Linking static target lib/librte_ring.a 00:01:06.888 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:06.888 [100/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:06.888 [101/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:06.888 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:06.888 [103/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:06.888 [104/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:06.888 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:06.888 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:06.888 [107/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:06.888 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:06.888 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:06.888 [110/268] Linking static target lib/librte_eal.a 00:01:06.888 [111/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.888 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:06.888 [113/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:06.888 [114/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:06.888 [115/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:06.888 [116/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.888 [117/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 
00:01:06.888 [118/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:06.888 [119/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:06.888 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:06.888 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:07.190 [122/268] Linking static target lib/librte_rcu.a 00:01:07.190 [123/268] Linking static target lib/librte_mempool.a 00:01:07.190 [124/268] Linking target lib/librte_telemetry.so.24.1 00:01:07.190 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:07.190 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:07.190 [127/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:07.190 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:07.190 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:07.190 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:07.190 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:07.190 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:07.190 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:07.190 [134/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.467 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:07.467 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:07.467 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:07.467 [138/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:07.467 [139/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:07.467 [140/268] Linking static target lib/librte_net.a 00:01:07.467 [141/268] Generating 
lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.467 [142/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:07.731 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:07.731 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:07.731 [145/268] Linking static target lib/librte_cmdline.a 00:01:07.731 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:07.731 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:07.731 [148/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.731 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:07.731 [150/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:07.731 [151/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:07.731 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:07.731 [153/268] Linking static target lib/librte_timer.a 00:01:07.731 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:07.731 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:07.731 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:07.990 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:07.990 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:07.990 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:07.990 [160/268] Linking static target lib/librte_dmadev.a 00:01:07.990 [161/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.990 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 
00:01:07.990 [163/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:07.990 [164/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:07.990 [165/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:07.990 [166/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.248 [167/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:08.248 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:08.248 [169/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:08.248 [170/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:08.248 [171/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.248 [172/268] Linking static target lib/librte_power.a 00:01:08.248 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:08.248 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:08.248 [175/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:08.248 [176/268] Linking static target lib/librte_compressdev.a 00:01:08.248 [177/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:08.248 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:08.248 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:08.248 [180/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:08.248 [181/268] Linking static target lib/librte_hash.a 00:01:08.248 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.248 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:08.505 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:08.505 [185/268] 
Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:08.505 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:08.505 [187/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:08.505 [188/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:08.505 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:08.505 [190/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:08.505 [191/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:08.505 [192/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.505 [193/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:08.505 [194/268] Linking static target lib/librte_mbuf.a 00:01:08.505 [195/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:08.505 [196/268] Linking static target lib/librte_reorder.a 00:01:08.505 [197/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:08.505 [198/268] Linking static target lib/librte_security.a 00:01:08.763 [199/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:08.763 [200/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:08.763 [201/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:08.763 [202/268] Linking static target drivers/librte_bus_vdev.a 00:01:08.763 [203/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.763 [204/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.763 [205/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:08.763 [206/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:08.763 [207/268] Compiling C object 
drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:08.763 [208/268] Linking static target drivers/librte_bus_pci.a 00:01:08.763 [209/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:08.763 [210/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:08.763 [211/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.763 [212/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:08.763 [213/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.763 [214/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:09.021 [215/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.021 [216/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.021 [217/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:09.021 [218/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.021 [219/268] Linking static target lib/librte_ethdev.a 00:01:09.021 [220/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:09.021 [221/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:09.021 [222/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:09.021 [223/268] Linking static target drivers/librte_mempool_ring.a 00:01:09.279 [224/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.279 [225/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:09.279 [226/268] Linking static target lib/librte_cryptodev.a 00:01:10.234 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson 
to capture output) 00:01:11.607 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:13.505 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.505 [230/268] Linking target lib/librte_eal.so.24.1 00:01:13.505 [231/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.505 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:13.505 [233/268] Linking target lib/librte_ring.so.24.1 00:01:13.505 [234/268] Linking target lib/librte_pci.so.24.1 00:01:13.505 [235/268] Linking target lib/librte_meter.so.24.1 00:01:13.505 [236/268] Linking target lib/librte_timer.so.24.1 00:01:13.505 [237/268] Linking target lib/librte_dmadev.so.24.1 00:01:13.505 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:13.505 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:13.505 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:13.505 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:13.505 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:13.505 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:13.505 [244/268] Linking target lib/librte_rcu.so.24.1 00:01:13.505 [245/268] Linking target lib/librte_mempool.so.24.1 00:01:13.505 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:13.763 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:13.763 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:13.763 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:13.763 [250/268] Linking target lib/librte_mbuf.so.24.1 00:01:13.763 [251/268] Generating symbol file 
lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:14.021 [252/268] Linking target lib/librte_reorder.so.24.1 00:01:14.022 [253/268] Linking target lib/librte_compressdev.so.24.1 00:01:14.022 [254/268] Linking target lib/librte_net.so.24.1 00:01:14.022 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:14.022 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:14.022 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:14.022 [258/268] Linking target lib/librte_security.so.24.1 00:01:14.022 [259/268] Linking target lib/librte_hash.so.24.1 00:01:14.022 [260/268] Linking target lib/librte_cmdline.so.24.1 00:01:14.022 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:14.280 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:14.280 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:14.280 [264/268] Linking target lib/librte_power.so.24.1 00:01:16.811 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:16.811 [266/268] Linking static target lib/librte_vhost.a 00:01:17.746 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.746 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:18.004 INFO: autodetecting backend as ninja 00:01:18.004 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:18.939 CC lib/ut_mock/mock.o 00:01:18.939 CC lib/log/log.o 00:01:18.939 CC lib/log/log_flags.o 00:01:18.939 CC lib/log/log_deprecated.o 00:01:18.939 CC lib/ut/ut.o 00:01:18.939 LIB libspdk_ut_mock.a 00:01:18.939 LIB libspdk_log.a 00:01:18.939 LIB libspdk_ut.a 00:01:18.939 SO libspdk_ut_mock.so.6.0 00:01:18.939 SO libspdk_log.so.7.0 00:01:18.939 SO libspdk_ut.so.2.0 00:01:18.939 SYMLINK libspdk_ut_mock.so 00:01:18.939 
SYMLINK libspdk_ut.so 00:01:18.939 SYMLINK libspdk_log.so 00:01:19.198 CC lib/dma/dma.o 00:01:19.198 CXX lib/trace_parser/trace.o 00:01:19.198 CC lib/util/base64.o 00:01:19.198 CC lib/util/bit_array.o 00:01:19.198 CC lib/util/cpuset.o 00:01:19.198 CC lib/util/crc16.o 00:01:19.198 CC lib/util/crc32.o 00:01:19.198 CC lib/ioat/ioat.o 00:01:19.198 CC lib/util/crc32c.o 00:01:19.198 CC lib/util/crc32_ieee.o 00:01:19.198 CC lib/util/crc64.o 00:01:19.198 CC lib/util/dif.o 00:01:19.198 CC lib/util/fd.o 00:01:19.198 CC lib/util/file.o 00:01:19.198 CC lib/util/hexlify.o 00:01:19.198 CC lib/util/iov.o 00:01:19.198 CC lib/util/math.o 00:01:19.198 CC lib/util/pipe.o 00:01:19.198 CC lib/util/strerror_tls.o 00:01:19.198 CC lib/util/string.o 00:01:19.198 CC lib/util/uuid.o 00:01:19.198 CC lib/util/fd_group.o 00:01:19.198 CC lib/util/xor.o 00:01:19.198 CC lib/util/zipf.o 00:01:19.198 CC lib/vfio_user/host/vfio_user_pci.o 00:01:19.198 CC lib/vfio_user/host/vfio_user.o 00:01:19.457 LIB libspdk_dma.a 00:01:19.457 SO libspdk_dma.so.4.0 00:01:19.457 LIB libspdk_ioat.a 00:01:19.457 SO libspdk_ioat.so.7.0 00:01:19.457 SYMLINK libspdk_dma.so 00:01:19.457 SYMLINK libspdk_ioat.so 00:01:19.714 LIB libspdk_vfio_user.a 00:01:19.714 SO libspdk_vfio_user.so.5.0 00:01:19.714 SYMLINK libspdk_vfio_user.so 00:01:19.714 LIB libspdk_util.a 00:01:19.714 SO libspdk_util.so.9.1 00:01:19.971 SYMLINK libspdk_util.so 00:01:20.229 CC lib/conf/conf.o 00:01:20.229 CC lib/vmd/vmd.o 00:01:20.229 CC lib/json/json_parse.o 00:01:20.229 CC lib/env_dpdk/env.o 00:01:20.229 CC lib/rdma_utils/rdma_utils.o 00:01:20.229 CC lib/vmd/led.o 00:01:20.229 CC lib/idxd/idxd.o 00:01:20.229 CC lib/rdma_provider/common.o 00:01:20.229 CC lib/json/json_util.o 00:01:20.229 CC lib/env_dpdk/memory.o 00:01:20.229 CC lib/idxd/idxd_user.o 00:01:20.229 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:20.229 CC lib/json/json_write.o 00:01:20.229 CC lib/env_dpdk/pci.o 00:01:20.229 CC lib/idxd/idxd_kernel.o 00:01:20.229 CC lib/env_dpdk/init.o 
00:01:20.229 CC lib/env_dpdk/threads.o 00:01:20.229 CC lib/env_dpdk/pci_ioat.o 00:01:20.229 CC lib/env_dpdk/pci_virtio.o 00:01:20.229 CC lib/env_dpdk/pci_vmd.o 00:01:20.229 CC lib/env_dpdk/pci_idxd.o 00:01:20.229 CC lib/env_dpdk/pci_event.o 00:01:20.229 CC lib/env_dpdk/sigbus_handler.o 00:01:20.229 CC lib/env_dpdk/pci_dpdk.o 00:01:20.229 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:20.229 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:20.229 LIB libspdk_trace_parser.a 00:01:20.229 SO libspdk_trace_parser.so.5.0 00:01:20.229 SYMLINK libspdk_trace_parser.so 00:01:20.229 LIB libspdk_rdma_provider.a 00:01:20.486 SO libspdk_rdma_provider.so.6.0 00:01:20.486 LIB libspdk_conf.a 00:01:20.486 SO libspdk_conf.so.6.0 00:01:20.486 SYMLINK libspdk_rdma_provider.so 00:01:20.486 LIB libspdk_rdma_utils.a 00:01:20.486 LIB libspdk_json.a 00:01:20.486 SYMLINK libspdk_conf.so 00:01:20.486 SO libspdk_rdma_utils.so.1.0 00:01:20.486 SO libspdk_json.so.6.0 00:01:20.486 SYMLINK libspdk_rdma_utils.so 00:01:20.486 SYMLINK libspdk_json.so 00:01:20.744 CC lib/jsonrpc/jsonrpc_server.o 00:01:20.744 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:20.744 CC lib/jsonrpc/jsonrpc_client.o 00:01:20.744 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:20.744 LIB libspdk_idxd.a 00:01:20.744 SO libspdk_idxd.so.12.0 00:01:20.744 LIB libspdk_vmd.a 00:01:20.744 SYMLINK libspdk_idxd.so 00:01:20.744 SO libspdk_vmd.so.6.0 00:01:21.002 SYMLINK libspdk_vmd.so 00:01:21.002 LIB libspdk_jsonrpc.a 00:01:21.002 SO libspdk_jsonrpc.so.6.0 00:01:21.002 SYMLINK libspdk_jsonrpc.so 00:01:21.259 CC lib/rpc/rpc.o 00:01:21.516 LIB libspdk_rpc.a 00:01:21.516 SO libspdk_rpc.so.6.0 00:01:21.516 SYMLINK libspdk_rpc.so 00:01:21.773 CC lib/trace/trace.o 00:01:21.773 CC lib/keyring/keyring.o 00:01:21.773 CC lib/notify/notify.o 00:01:21.773 CC lib/trace/trace_flags.o 00:01:21.773 CC lib/keyring/keyring_rpc.o 00:01:21.773 CC lib/notify/notify_rpc.o 00:01:21.773 CC lib/trace/trace_rpc.o 00:01:21.773 LIB libspdk_notify.a 00:01:21.773 SO libspdk_notify.so.6.0 
00:01:22.030 LIB libspdk_keyring.a 00:01:22.030 SYMLINK libspdk_notify.so 00:01:22.030 LIB libspdk_trace.a 00:01:22.030 SO libspdk_keyring.so.1.0 00:01:22.030 SO libspdk_trace.so.10.0 00:01:22.030 SYMLINK libspdk_keyring.so 00:01:22.030 SYMLINK libspdk_trace.so 00:01:22.030 LIB libspdk_env_dpdk.a 00:01:22.287 CC lib/thread/thread.o 00:01:22.287 CC lib/thread/iobuf.o 00:01:22.287 SO libspdk_env_dpdk.so.14.1 00:01:22.287 CC lib/sock/sock.o 00:01:22.287 CC lib/sock/sock_rpc.o 00:01:22.287 SYMLINK libspdk_env_dpdk.so 00:01:22.579 LIB libspdk_sock.a 00:01:22.579 SO libspdk_sock.so.10.0 00:01:22.579 SYMLINK libspdk_sock.so 00:01:22.842 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:22.842 CC lib/nvme/nvme_ctrlr.o 00:01:22.842 CC lib/nvme/nvme_fabric.o 00:01:22.842 CC lib/nvme/nvme_ns_cmd.o 00:01:22.842 CC lib/nvme/nvme_ns.o 00:01:22.842 CC lib/nvme/nvme_pcie_common.o 00:01:22.842 CC lib/nvme/nvme_pcie.o 00:01:22.842 CC lib/nvme/nvme_qpair.o 00:01:22.842 CC lib/nvme/nvme.o 00:01:22.842 CC lib/nvme/nvme_quirks.o 00:01:22.842 CC lib/nvme/nvme_transport.o 00:01:22.842 CC lib/nvme/nvme_discovery.o 00:01:22.842 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:22.842 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:22.842 CC lib/nvme/nvme_tcp.o 00:01:22.842 CC lib/nvme/nvme_opal.o 00:01:22.842 CC lib/nvme/nvme_io_msg.o 00:01:22.842 CC lib/nvme/nvme_poll_group.o 00:01:22.842 CC lib/nvme/nvme_zns.o 00:01:22.842 CC lib/nvme/nvme_stubs.o 00:01:22.842 CC lib/nvme/nvme_auth.o 00:01:22.842 CC lib/nvme/nvme_cuse.o 00:01:22.842 CC lib/nvme/nvme_vfio_user.o 00:01:22.842 CC lib/nvme/nvme_rdma.o 00:01:23.775 LIB libspdk_thread.a 00:01:23.775 SO libspdk_thread.so.10.1 00:01:23.775 SYMLINK libspdk_thread.so 00:01:24.033 CC lib/vfu_tgt/tgt_endpoint.o 00:01:24.033 CC lib/virtio/virtio.o 00:01:24.033 CC lib/accel/accel.o 00:01:24.033 CC lib/init/json_config.o 00:01:24.033 CC lib/blob/blobstore.o 00:01:24.033 CC lib/virtio/virtio_vhost_user.o 00:01:24.033 CC lib/vfu_tgt/tgt_rpc.o 00:01:24.033 CC lib/blob/request.o 
00:01:24.033 CC lib/accel/accel_rpc.o 00:01:24.033 CC lib/init/subsystem.o 00:01:24.033 CC lib/virtio/virtio_vfio_user.o 00:01:24.033 CC lib/blob/zeroes.o 00:01:24.033 CC lib/init/subsystem_rpc.o 00:01:24.033 CC lib/virtio/virtio_pci.o 00:01:24.033 CC lib/accel/accel_sw.o 00:01:24.033 CC lib/blob/blob_bs_dev.o 00:01:24.033 CC lib/init/rpc.o 00:01:24.291 LIB libspdk_init.a 00:01:24.291 SO libspdk_init.so.5.0 00:01:24.291 LIB libspdk_virtio.a 00:01:24.291 LIB libspdk_vfu_tgt.a 00:01:24.291 SYMLINK libspdk_init.so 00:01:24.291 SO libspdk_vfu_tgt.so.3.0 00:01:24.291 SO libspdk_virtio.so.7.0 00:01:24.291 SYMLINK libspdk_vfu_tgt.so 00:01:24.291 SYMLINK libspdk_virtio.so 00:01:24.550 CC lib/event/app.o 00:01:24.550 CC lib/event/reactor.o 00:01:24.550 CC lib/event/log_rpc.o 00:01:24.550 CC lib/event/app_rpc.o 00:01:24.550 CC lib/event/scheduler_static.o 00:01:24.808 LIB libspdk_event.a 00:01:24.808 SO libspdk_event.so.14.0 00:01:25.066 SYMLINK libspdk_event.so 00:01:25.067 LIB libspdk_accel.a 00:01:25.067 SO libspdk_accel.so.15.1 00:01:25.067 SYMLINK libspdk_accel.so 00:01:25.326 LIB libspdk_nvme.a 00:01:25.326 CC lib/bdev/bdev.o 00:01:25.326 CC lib/bdev/bdev_rpc.o 00:01:25.326 CC lib/bdev/bdev_zone.o 00:01:25.326 CC lib/bdev/part.o 00:01:25.326 CC lib/bdev/scsi_nvme.o 00:01:25.326 SO libspdk_nvme.so.13.1 00:01:25.584 SYMLINK libspdk_nvme.so 00:01:26.960 LIB libspdk_blob.a 00:01:26.960 SO libspdk_blob.so.11.0 00:01:26.960 SYMLINK libspdk_blob.so 00:01:27.218 CC lib/blobfs/blobfs.o 00:01:27.218 CC lib/blobfs/tree.o 00:01:27.218 CC lib/lvol/lvol.o 00:01:27.786 LIB libspdk_bdev.a 00:01:27.786 SO libspdk_bdev.so.15.1 00:01:27.786 SYMLINK libspdk_bdev.so 00:01:28.055 LIB libspdk_blobfs.a 00:01:28.055 SO libspdk_blobfs.so.10.0 00:01:28.055 CC lib/scsi/dev.o 00:01:28.055 CC lib/nvmf/ctrlr.o 00:01:28.055 CC lib/scsi/lun.o 00:01:28.055 CC lib/nbd/nbd.o 00:01:28.055 CC lib/nvmf/ctrlr_discovery.o 00:01:28.055 CC lib/ublk/ublk.o 00:01:28.055 CC lib/scsi/port.o 00:01:28.055 CC 
lib/nbd/nbd_rpc.o 00:01:28.055 CC lib/ublk/ublk_rpc.o 00:01:28.055 CC lib/scsi/scsi.o 00:01:28.055 CC lib/ftl/ftl_core.o 00:01:28.055 CC lib/nvmf/ctrlr_bdev.o 00:01:28.055 CC lib/ftl/ftl_init.o 00:01:28.055 CC lib/scsi/scsi_bdev.o 00:01:28.055 CC lib/ftl/ftl_layout.o 00:01:28.055 CC lib/nvmf/subsystem.o 00:01:28.055 CC lib/scsi/scsi_pr.o 00:01:28.055 CC lib/nvmf/nvmf.o 00:01:28.055 CC lib/ftl/ftl_debug.o 00:01:28.055 CC lib/nvmf/nvmf_rpc.o 00:01:28.055 CC lib/scsi/scsi_rpc.o 00:01:28.055 CC lib/ftl/ftl_io.o 00:01:28.055 CC lib/scsi/task.o 00:01:28.055 CC lib/ftl/ftl_sb.o 00:01:28.055 CC lib/nvmf/transport.o 00:01:28.055 CC lib/nvmf/tcp.o 00:01:28.055 CC lib/nvmf/stubs.o 00:01:28.055 CC lib/ftl/ftl_l2p_flat.o 00:01:28.055 CC lib/ftl/ftl_l2p.o 00:01:28.055 CC lib/nvmf/mdns_server.o 00:01:28.055 CC lib/ftl/ftl_nv_cache.o 00:01:28.055 CC lib/nvmf/vfio_user.o 00:01:28.055 CC lib/ftl/ftl_band.o 00:01:28.055 CC lib/nvmf/rdma.o 00:01:28.055 CC lib/ftl/ftl_band_ops.o 00:01:28.055 CC lib/nvmf/auth.o 00:01:28.055 CC lib/ftl/ftl_writer.o 00:01:28.055 CC lib/ftl/ftl_rq.o 00:01:28.055 CC lib/ftl/ftl_reloc.o 00:01:28.055 CC lib/ftl/ftl_l2p_cache.o 00:01:28.055 CC lib/ftl/ftl_p2l.o 00:01:28.055 CC lib/ftl/mngt/ftl_mngt.o 00:01:28.055 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:28.055 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:28.055 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:28.055 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:28.055 SYMLINK libspdk_blobfs.so 00:01:28.055 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:28.055 LIB libspdk_lvol.a 00:01:28.314 SO libspdk_lvol.so.10.0 00:01:28.314 SYMLINK libspdk_lvol.so 00:01:28.314 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:28.314 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:28.314 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:28.314 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:28.576 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:28.576 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:28.576 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:28.576 CC lib/ftl/utils/ftl_conf.o 00:01:28.576 CC 
lib/ftl/utils/ftl_md.o 00:01:28.576 CC lib/ftl/utils/ftl_mempool.o 00:01:28.576 CC lib/ftl/utils/ftl_bitmap.o 00:01:28.576 CC lib/ftl/utils/ftl_property.o 00:01:28.576 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:28.576 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:28.576 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:28.576 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:28.576 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:28.576 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:28.576 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:28.576 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:28.576 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:28.576 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:28.576 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:28.836 CC lib/ftl/base/ftl_base_dev.o 00:01:28.836 CC lib/ftl/base/ftl_base_bdev.o 00:01:28.836 CC lib/ftl/ftl_trace.o 00:01:28.836 LIB libspdk_nbd.a 00:01:28.836 SO libspdk_nbd.so.7.0 00:01:29.094 SYMLINK libspdk_nbd.so 00:01:29.094 LIB libspdk_scsi.a 00:01:29.094 SO libspdk_scsi.so.9.0 00:01:29.094 LIB libspdk_ublk.a 00:01:29.094 SYMLINK libspdk_scsi.so 00:01:29.094 SO libspdk_ublk.so.3.0 00:01:29.352 SYMLINK libspdk_ublk.so 00:01:29.352 CC lib/vhost/vhost.o 00:01:29.352 CC lib/iscsi/conn.o 00:01:29.352 CC lib/vhost/vhost_rpc.o 00:01:29.352 CC lib/vhost/vhost_scsi.o 00:01:29.352 CC lib/iscsi/init_grp.o 00:01:29.352 CC lib/iscsi/iscsi.o 00:01:29.352 CC lib/vhost/vhost_blk.o 00:01:29.352 CC lib/iscsi/md5.o 00:01:29.352 CC lib/iscsi/param.o 00:01:29.352 CC lib/vhost/rte_vhost_user.o 00:01:29.352 CC lib/iscsi/portal_grp.o 00:01:29.352 CC lib/iscsi/tgt_node.o 00:01:29.352 CC lib/iscsi/iscsi_subsystem.o 00:01:29.352 CC lib/iscsi/iscsi_rpc.o 00:01:29.352 CC lib/iscsi/task.o 00:01:29.610 LIB libspdk_ftl.a 00:01:29.610 SO libspdk_ftl.so.9.0 00:01:30.175 SYMLINK libspdk_ftl.so 00:01:30.433 LIB libspdk_vhost.a 00:01:30.433 SO libspdk_vhost.so.8.0 00:01:30.691 LIB libspdk_nvmf.a 00:01:30.691 SO libspdk_nvmf.so.19.0 00:01:30.691 SYMLINK libspdk_vhost.so 00:01:30.691 LIB libspdk_iscsi.a 
00:01:30.691 SO libspdk_iscsi.so.8.0 00:01:30.949 SYMLINK libspdk_nvmf.so 00:01:30.949 SYMLINK libspdk_iscsi.so 00:01:31.208 CC module/vfu_device/vfu_virtio.o 00:01:31.208 CC module/vfu_device/vfu_virtio_blk.o 00:01:31.208 CC module/vfu_device/vfu_virtio_scsi.o 00:01:31.208 CC module/vfu_device/vfu_virtio_rpc.o 00:01:31.208 CC module/env_dpdk/env_dpdk_rpc.o 00:01:31.208 CC module/accel/error/accel_error.o 00:01:31.208 CC module/keyring/file/keyring.o 00:01:31.208 CC module/blob/bdev/blob_bdev.o 00:01:31.208 CC module/accel/error/accel_error_rpc.o 00:01:31.208 CC module/sock/posix/posix.o 00:01:31.208 CC module/keyring/file/keyring_rpc.o 00:01:31.208 CC module/accel/ioat/accel_ioat.o 00:01:31.208 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:31.208 CC module/accel/iaa/accel_iaa.o 00:01:31.208 CC module/accel/dsa/accel_dsa.o 00:01:31.208 CC module/accel/iaa/accel_iaa_rpc.o 00:01:31.208 CC module/accel/ioat/accel_ioat_rpc.o 00:01:31.208 CC module/scheduler/gscheduler/gscheduler.o 00:01:31.208 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:31.208 CC module/accel/dsa/accel_dsa_rpc.o 00:01:31.208 CC module/keyring/linux/keyring.o 00:01:31.208 CC module/keyring/linux/keyring_rpc.o 00:01:31.467 LIB libspdk_env_dpdk_rpc.a 00:01:31.467 SO libspdk_env_dpdk_rpc.so.6.0 00:01:31.467 SYMLINK libspdk_env_dpdk_rpc.so 00:01:31.467 LIB libspdk_keyring_linux.a 00:01:31.467 LIB libspdk_keyring_file.a 00:01:31.467 LIB libspdk_scheduler_gscheduler.a 00:01:31.467 LIB libspdk_scheduler_dpdk_governor.a 00:01:31.467 SO libspdk_keyring_linux.so.1.0 00:01:31.467 SO libspdk_keyring_file.so.1.0 00:01:31.467 SO libspdk_scheduler_gscheduler.so.4.0 00:01:31.467 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:31.467 LIB libspdk_accel_error.a 00:01:31.467 LIB libspdk_accel_ioat.a 00:01:31.467 LIB libspdk_scheduler_dynamic.a 00:01:31.467 LIB libspdk_accel_iaa.a 00:01:31.467 SO libspdk_accel_error.so.2.0 00:01:31.467 SO libspdk_accel_ioat.so.6.0 00:01:31.467 SO 
libspdk_scheduler_dynamic.so.4.0 00:01:31.467 SYMLINK libspdk_keyring_file.so 00:01:31.467 SYMLINK libspdk_keyring_linux.so 00:01:31.467 SYMLINK libspdk_scheduler_gscheduler.so 00:01:31.467 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:31.467 SO libspdk_accel_iaa.so.3.0 00:01:31.726 LIB libspdk_accel_dsa.a 00:01:31.726 SYMLINK libspdk_accel_error.so 00:01:31.726 LIB libspdk_blob_bdev.a 00:01:31.726 SYMLINK libspdk_accel_ioat.so 00:01:31.726 SYMLINK libspdk_scheduler_dynamic.so 00:01:31.726 SYMLINK libspdk_accel_iaa.so 00:01:31.726 SO libspdk_accel_dsa.so.5.0 00:01:31.726 SO libspdk_blob_bdev.so.11.0 00:01:31.726 SYMLINK libspdk_blob_bdev.so 00:01:31.726 SYMLINK libspdk_accel_dsa.so 00:01:31.985 LIB libspdk_vfu_device.a 00:01:31.985 SO libspdk_vfu_device.so.3.0 00:01:31.985 CC module/bdev/nvme/bdev_nvme.o 00:01:31.985 CC module/bdev/delay/vbdev_delay.o 00:01:31.985 CC module/blobfs/bdev/blobfs_bdev.o 00:01:31.985 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:31.985 CC module/bdev/malloc/bdev_malloc.o 00:01:31.985 CC module/bdev/nvme/nvme_rpc.o 00:01:31.985 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:31.985 CC module/bdev/error/vbdev_error.o 00:01:31.985 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:31.985 CC module/bdev/nvme/bdev_mdns_client.o 00:01:31.985 CC module/bdev/null/bdev_null.o 00:01:31.985 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:31.985 CC module/bdev/error/vbdev_error_rpc.o 00:01:31.985 CC module/bdev/lvol/vbdev_lvol.o 00:01:31.985 CC module/bdev/null/bdev_null_rpc.o 00:01:31.985 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:31.985 CC module/bdev/nvme/vbdev_opal.o 00:01:31.985 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:31.985 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:31.985 CC module/bdev/split/vbdev_split.o 00:01:31.985 CC module/bdev/gpt/gpt.o 00:01:31.985 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:31.985 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:31.985 CC module/bdev/ftl/bdev_ftl.o 00:01:31.985 CC module/bdev/gpt/vbdev_gpt.o 
00:01:31.985 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:31.985 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:31.985 CC module/bdev/split/vbdev_split_rpc.o 00:01:31.985 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:31.985 CC module/bdev/passthru/vbdev_passthru.o 00:01:31.985 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:31.985 CC module/bdev/raid/bdev_raid.o 00:01:31.985 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:31.985 CC module/bdev/aio/bdev_aio.o 00:01:31.985 CC module/bdev/raid/bdev_raid_rpc.o 00:01:31.985 CC module/bdev/iscsi/bdev_iscsi.o 00:01:31.985 CC module/bdev/aio/bdev_aio_rpc.o 00:01:31.985 CC module/bdev/raid/bdev_raid_sb.o 00:01:31.985 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:31.985 CC module/bdev/raid/raid0.o 00:01:31.985 CC module/bdev/raid/raid1.o 00:01:31.985 CC module/bdev/raid/concat.o 00:01:31.985 SYMLINK libspdk_vfu_device.so 00:01:32.244 LIB libspdk_sock_posix.a 00:01:32.244 SO libspdk_sock_posix.so.6.0 00:01:32.244 LIB libspdk_blobfs_bdev.a 00:01:32.244 SO libspdk_blobfs_bdev.so.6.0 00:01:32.244 SYMLINK libspdk_sock_posix.so 00:01:32.501 LIB libspdk_bdev_split.a 00:01:32.501 LIB libspdk_bdev_ftl.a 00:01:32.501 SYMLINK libspdk_blobfs_bdev.so 00:01:32.501 SO libspdk_bdev_split.so.6.0 00:01:32.501 LIB libspdk_bdev_null.a 00:01:32.501 LIB libspdk_bdev_passthru.a 00:01:32.501 SO libspdk_bdev_ftl.so.6.0 00:01:32.501 LIB libspdk_bdev_error.a 00:01:32.501 LIB libspdk_bdev_gpt.a 00:01:32.501 SO libspdk_bdev_null.so.6.0 00:01:32.501 SO libspdk_bdev_passthru.so.6.0 00:01:32.501 SO libspdk_bdev_error.so.6.0 00:01:32.501 SYMLINK libspdk_bdev_split.so 00:01:32.501 LIB libspdk_bdev_iscsi.a 00:01:32.501 SO libspdk_bdev_gpt.so.6.0 00:01:32.501 SYMLINK libspdk_bdev_ftl.so 00:01:32.501 LIB libspdk_bdev_malloc.a 00:01:32.501 SO libspdk_bdev_iscsi.so.6.0 00:01:32.501 SYMLINK libspdk_bdev_error.so 00:01:32.501 SYMLINK libspdk_bdev_passthru.so 00:01:32.501 SYMLINK libspdk_bdev_null.so 00:01:32.501 LIB libspdk_bdev_aio.a 00:01:32.501 SO 
libspdk_bdev_malloc.so.6.0 00:01:32.501 LIB libspdk_bdev_delay.a 00:01:32.501 SYMLINK libspdk_bdev_gpt.so 00:01:32.501 SO libspdk_bdev_aio.so.6.0 00:01:32.501 LIB libspdk_bdev_zone_block.a 00:01:32.501 SO libspdk_bdev_delay.so.6.0 00:01:32.501 SYMLINK libspdk_bdev_iscsi.so 00:01:32.501 SO libspdk_bdev_zone_block.so.6.0 00:01:32.501 SYMLINK libspdk_bdev_malloc.so 00:01:32.501 SYMLINK libspdk_bdev_aio.so 00:01:32.501 SYMLINK libspdk_bdev_delay.so 00:01:32.759 SYMLINK libspdk_bdev_zone_block.so 00:01:32.759 LIB libspdk_bdev_virtio.a 00:01:32.759 LIB libspdk_bdev_lvol.a 00:01:32.759 SO libspdk_bdev_virtio.so.6.0 00:01:32.759 SO libspdk_bdev_lvol.so.6.0 00:01:32.759 SYMLINK libspdk_bdev_virtio.so 00:01:32.759 SYMLINK libspdk_bdev_lvol.so 00:01:33.018 LIB libspdk_bdev_raid.a 00:01:33.018 SO libspdk_bdev_raid.so.6.0 00:01:33.018 SYMLINK libspdk_bdev_raid.so 00:01:34.417 LIB libspdk_bdev_nvme.a 00:01:34.417 SO libspdk_bdev_nvme.so.7.0 00:01:34.417 SYMLINK libspdk_bdev_nvme.so 00:01:34.675 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:34.675 CC module/event/subsystems/vmd/vmd.o 00:01:34.675 CC module/event/subsystems/sock/sock.o 00:01:34.675 CC module/event/subsystems/iobuf/iobuf.o 00:01:34.675 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:34.675 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:34.675 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:34.675 CC module/event/subsystems/keyring/keyring.o 00:01:34.675 CC module/event/subsystems/scheduler/scheduler.o 00:01:34.933 LIB libspdk_event_keyring.a 00:01:34.933 LIB libspdk_event_vhost_blk.a 00:01:34.933 LIB libspdk_event_vfu_tgt.a 00:01:34.933 LIB libspdk_event_scheduler.a 00:01:34.933 LIB libspdk_event_vmd.a 00:01:34.933 LIB libspdk_event_sock.a 00:01:34.933 LIB libspdk_event_iobuf.a 00:01:34.933 SO libspdk_event_vhost_blk.so.3.0 00:01:34.933 SO libspdk_event_keyring.so.1.0 00:01:34.933 SO libspdk_event_vfu_tgt.so.3.0 00:01:34.933 SO libspdk_event_scheduler.so.4.0 00:01:34.933 SO libspdk_event_sock.so.5.0 
00:01:34.933 SO libspdk_event_vmd.so.6.0 00:01:34.933 SO libspdk_event_iobuf.so.3.0 00:01:34.933 SYMLINK libspdk_event_vhost_blk.so 00:01:34.933 SYMLINK libspdk_event_keyring.so 00:01:34.933 SYMLINK libspdk_event_vfu_tgt.so 00:01:34.933 SYMLINK libspdk_event_sock.so 00:01:34.933 SYMLINK libspdk_event_scheduler.so 00:01:34.933 SYMLINK libspdk_event_vmd.so 00:01:34.933 SYMLINK libspdk_event_iobuf.so 00:01:35.190 CC module/event/subsystems/accel/accel.o 00:01:35.447 LIB libspdk_event_accel.a 00:01:35.447 SO libspdk_event_accel.so.6.0 00:01:35.447 SYMLINK libspdk_event_accel.so 00:01:35.705 CC module/event/subsystems/bdev/bdev.o 00:01:35.705 LIB libspdk_event_bdev.a 00:01:35.705 SO libspdk_event_bdev.so.6.0 00:01:35.963 SYMLINK libspdk_event_bdev.so 00:01:35.963 CC module/event/subsystems/ublk/ublk.o 00:01:35.963 CC module/event/subsystems/nbd/nbd.o 00:01:35.963 CC module/event/subsystems/scsi/scsi.o 00:01:35.963 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:35.963 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:36.220 LIB libspdk_event_ublk.a 00:01:36.220 LIB libspdk_event_nbd.a 00:01:36.220 LIB libspdk_event_scsi.a 00:01:36.220 SO libspdk_event_nbd.so.6.0 00:01:36.220 SO libspdk_event_ublk.so.3.0 00:01:36.220 SO libspdk_event_scsi.so.6.0 00:01:36.220 SYMLINK libspdk_event_nbd.so 00:01:36.220 SYMLINK libspdk_event_ublk.so 00:01:36.220 SYMLINK libspdk_event_scsi.so 00:01:36.220 LIB libspdk_event_nvmf.a 00:01:36.220 SO libspdk_event_nvmf.so.6.0 00:01:36.478 SYMLINK libspdk_event_nvmf.so 00:01:36.478 CC module/event/subsystems/iscsi/iscsi.o 00:01:36.478 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:36.478 LIB libspdk_event_vhost_scsi.a 00:01:36.478 LIB libspdk_event_iscsi.a 00:01:36.736 SO libspdk_event_vhost_scsi.so.3.0 00:01:36.736 SO libspdk_event_iscsi.so.6.0 00:01:36.736 SYMLINK libspdk_event_vhost_scsi.so 00:01:36.736 SYMLINK libspdk_event_iscsi.so 00:01:36.736 SO libspdk.so.6.0 00:01:36.736 SYMLINK libspdk.so 00:01:36.997 CC 
app/trace_record/trace_record.o 00:01:36.997 TEST_HEADER include/spdk/accel.h 00:01:36.997 TEST_HEADER include/spdk/assert.h 00:01:36.997 TEST_HEADER include/spdk/accel_module.h 00:01:36.997 CXX app/trace/trace.o 00:01:36.997 CC app/spdk_top/spdk_top.o 00:01:36.997 TEST_HEADER include/spdk/barrier.h 00:01:36.997 TEST_HEADER include/spdk/bdev.h 00:01:36.997 TEST_HEADER include/spdk/base64.h 00:01:36.997 CC app/spdk_lspci/spdk_lspci.o 00:01:36.997 TEST_HEADER include/spdk/bdev_module.h 00:01:36.997 TEST_HEADER include/spdk/bdev_zone.h 00:01:36.997 TEST_HEADER include/spdk/bit_array.h 00:01:36.997 CC app/spdk_nvme_identify/identify.o 00:01:36.997 TEST_HEADER include/spdk/bit_pool.h 00:01:36.997 TEST_HEADER include/spdk/blob_bdev.h 00:01:36.997 CC app/spdk_nvme_discover/discovery_aer.o 00:01:36.997 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:36.997 TEST_HEADER include/spdk/blobfs.h 00:01:36.997 CC app/spdk_nvme_perf/perf.o 00:01:36.997 TEST_HEADER include/spdk/blob.h 00:01:36.997 CC test/rpc_client/rpc_client_test.o 00:01:36.997 TEST_HEADER include/spdk/conf.h 00:01:36.997 TEST_HEADER include/spdk/config.h 00:01:36.997 TEST_HEADER include/spdk/cpuset.h 00:01:36.997 TEST_HEADER include/spdk/crc16.h 00:01:36.997 TEST_HEADER include/spdk/crc32.h 00:01:36.997 TEST_HEADER include/spdk/crc64.h 00:01:36.997 TEST_HEADER include/spdk/dif.h 00:01:36.997 TEST_HEADER include/spdk/dma.h 00:01:36.997 TEST_HEADER include/spdk/endian.h 00:01:36.997 TEST_HEADER include/spdk/env_dpdk.h 00:01:36.997 TEST_HEADER include/spdk/env.h 00:01:36.997 TEST_HEADER include/spdk/event.h 00:01:36.997 TEST_HEADER include/spdk/fd.h 00:01:36.997 TEST_HEADER include/spdk/fd_group.h 00:01:36.997 TEST_HEADER include/spdk/file.h 00:01:36.997 TEST_HEADER include/spdk/ftl.h 00:01:36.997 TEST_HEADER include/spdk/gpt_spec.h 00:01:36.997 TEST_HEADER include/spdk/hexlify.h 00:01:36.997 TEST_HEADER include/spdk/histogram_data.h 00:01:36.997 TEST_HEADER include/spdk/idxd.h 00:01:36.997 TEST_HEADER 
include/spdk/idxd_spec.h 00:01:36.997 TEST_HEADER include/spdk/init.h 00:01:36.997 TEST_HEADER include/spdk/ioat.h 00:01:36.997 TEST_HEADER include/spdk/ioat_spec.h 00:01:36.997 TEST_HEADER include/spdk/iscsi_spec.h 00:01:36.997 TEST_HEADER include/spdk/json.h 00:01:36.997 TEST_HEADER include/spdk/jsonrpc.h 00:01:36.997 TEST_HEADER include/spdk/keyring.h 00:01:36.997 TEST_HEADER include/spdk/keyring_module.h 00:01:36.997 TEST_HEADER include/spdk/log.h 00:01:36.997 TEST_HEADER include/spdk/likely.h 00:01:36.997 TEST_HEADER include/spdk/memory.h 00:01:36.997 TEST_HEADER include/spdk/lvol.h 00:01:36.997 TEST_HEADER include/spdk/mmio.h 00:01:36.997 TEST_HEADER include/spdk/nbd.h 00:01:36.997 TEST_HEADER include/spdk/notify.h 00:01:36.997 TEST_HEADER include/spdk/nvme.h 00:01:36.997 TEST_HEADER include/spdk/nvme_intel.h 00:01:36.997 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:36.997 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:36.997 TEST_HEADER include/spdk/nvme_spec.h 00:01:36.997 TEST_HEADER include/spdk/nvme_zns.h 00:01:36.997 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:36.997 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:36.997 TEST_HEADER include/spdk/nvmf.h 00:01:36.997 TEST_HEADER include/spdk/nvmf_spec.h 00:01:36.997 TEST_HEADER include/spdk/nvmf_transport.h 00:01:36.997 TEST_HEADER include/spdk/opal.h 00:01:36.997 TEST_HEADER include/spdk/opal_spec.h 00:01:36.997 TEST_HEADER include/spdk/pci_ids.h 00:01:36.997 TEST_HEADER include/spdk/pipe.h 00:01:36.997 TEST_HEADER include/spdk/queue.h 00:01:36.997 TEST_HEADER include/spdk/reduce.h 00:01:36.997 TEST_HEADER include/spdk/rpc.h 00:01:36.997 TEST_HEADER include/spdk/scheduler.h 00:01:36.997 TEST_HEADER include/spdk/scsi.h 00:01:36.997 TEST_HEADER include/spdk/scsi_spec.h 00:01:36.997 TEST_HEADER include/spdk/sock.h 00:01:36.997 TEST_HEADER include/spdk/string.h 00:01:36.997 TEST_HEADER include/spdk/stdinc.h 00:01:36.997 TEST_HEADER include/spdk/thread.h 00:01:36.997 TEST_HEADER include/spdk/trace.h 00:01:36.997 
TEST_HEADER include/spdk/trace_parser.h 00:01:36.997 TEST_HEADER include/spdk/ublk.h 00:01:36.997 TEST_HEADER include/spdk/tree.h 00:01:36.997 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:36.997 TEST_HEADER include/spdk/util.h 00:01:36.997 TEST_HEADER include/spdk/uuid.h 00:01:36.997 TEST_HEADER include/spdk/version.h 00:01:36.997 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:36.997 TEST_HEADER include/spdk/vhost.h 00:01:36.997 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:36.997 TEST_HEADER include/spdk/vmd.h 00:01:36.997 TEST_HEADER include/spdk/xor.h 00:01:36.997 TEST_HEADER include/spdk/zipf.h 00:01:36.997 CXX test/cpp_headers/accel.o 00:01:36.997 CXX test/cpp_headers/accel_module.o 00:01:36.997 CXX test/cpp_headers/assert.o 00:01:36.997 CXX test/cpp_headers/barrier.o 00:01:36.997 CXX test/cpp_headers/base64.o 00:01:36.997 CXX test/cpp_headers/bdev.o 00:01:36.997 CXX test/cpp_headers/bdev_module.o 00:01:36.997 CXX test/cpp_headers/bdev_zone.o 00:01:36.997 CXX test/cpp_headers/bit_array.o 00:01:36.997 CXX test/cpp_headers/bit_pool.o 00:01:36.997 CXX test/cpp_headers/blob_bdev.o 00:01:36.997 CXX test/cpp_headers/blobfs_bdev.o 00:01:36.997 CXX test/cpp_headers/blobfs.o 00:01:36.997 CXX test/cpp_headers/blob.o 00:01:36.997 CXX test/cpp_headers/conf.o 00:01:36.997 CXX test/cpp_headers/config.o 00:01:36.997 CXX test/cpp_headers/cpuset.o 00:01:36.997 CXX test/cpp_headers/crc16.o 00:01:36.997 CC app/spdk_dd/spdk_dd.o 00:01:36.997 CC app/nvmf_tgt/nvmf_main.o 00:01:36.997 CC app/iscsi_tgt/iscsi_tgt.o 00:01:36.997 CXX test/cpp_headers/crc32.o 00:01:36.997 CC app/spdk_tgt/spdk_tgt.o 00:01:36.997 CC examples/ioat/perf/perf.o 00:01:36.997 CC examples/util/zipf/zipf.o 00:01:36.998 CC test/app/stub/stub.o 00:01:37.260 CC test/env/vtophys/vtophys.o 00:01:37.260 CC test/env/memory/memory_ut.o 00:01:37.260 CC test/app/histogram_perf/histogram_perf.o 00:01:37.260 CC examples/ioat/verify/verify.o 00:01:37.260 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:37.260 CC 
test/env/pci/pci_ut.o 00:01:37.260 CC test/app/jsoncat/jsoncat.o 00:01:37.260 CC app/fio/nvme/fio_plugin.o 00:01:37.260 CC test/thread/poller_perf/poller_perf.o 00:01:37.260 CC test/dma/test_dma/test_dma.o 00:01:37.260 CC test/app/bdev_svc/bdev_svc.o 00:01:37.260 CC app/fio/bdev/fio_plugin.o 00:01:37.260 LINK spdk_lspci 00:01:37.260 CC test/env/mem_callbacks/mem_callbacks.o 00:01:37.260 LINK rpc_client_test 00:01:37.260 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:37.530 LINK spdk_nvme_discover 00:01:37.530 LINK interrupt_tgt 00:01:37.530 LINK histogram_perf 00:01:37.530 LINK jsoncat 00:01:37.530 CXX test/cpp_headers/crc64.o 00:01:37.530 LINK poller_perf 00:01:37.530 LINK vtophys 00:01:37.530 LINK zipf 00:01:37.530 CXX test/cpp_headers/dif.o 00:01:37.530 CXX test/cpp_headers/dma.o 00:01:37.530 LINK env_dpdk_post_init 00:01:37.530 CXX test/cpp_headers/endian.o 00:01:37.530 CXX test/cpp_headers/env_dpdk.o 00:01:37.530 LINK spdk_trace_record 00:01:37.530 CXX test/cpp_headers/env.o 00:01:37.530 LINK stub 00:01:37.530 CXX test/cpp_headers/event.o 00:01:37.530 LINK nvmf_tgt 00:01:37.530 CXX test/cpp_headers/fd_group.o 00:01:37.530 CXX test/cpp_headers/fd.o 00:01:37.530 CXX test/cpp_headers/file.o 00:01:37.530 CXX test/cpp_headers/ftl.o 00:01:37.530 CXX test/cpp_headers/gpt_spec.o 00:01:37.530 CXX test/cpp_headers/hexlify.o 00:01:37.530 LINK iscsi_tgt 00:01:37.530 CXX test/cpp_headers/histogram_data.o 00:01:37.530 CXX test/cpp_headers/idxd.o 00:01:37.530 LINK ioat_perf 00:01:37.530 CXX test/cpp_headers/idxd_spec.o 00:01:37.530 LINK spdk_tgt 00:01:37.530 LINK bdev_svc 00:01:37.530 LINK verify 00:01:37.530 CXX test/cpp_headers/init.o 00:01:37.530 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:37.819 CXX test/cpp_headers/ioat.o 00:01:37.819 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:37.819 CXX test/cpp_headers/ioat_spec.o 00:01:37.819 CXX test/cpp_headers/iscsi_spec.o 00:01:37.819 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:37.819 CXX test/cpp_headers/json.o 
00:01:37.819 CXX test/cpp_headers/jsonrpc.o 00:01:37.819 LINK spdk_dd 00:01:37.819 CXX test/cpp_headers/keyring.o 00:01:37.819 CXX test/cpp_headers/keyring_module.o 00:01:37.819 CXX test/cpp_headers/likely.o 00:01:37.819 LINK spdk_trace 00:01:37.819 CXX test/cpp_headers/log.o 00:01:37.819 CXX test/cpp_headers/lvol.o 00:01:37.819 CXX test/cpp_headers/memory.o 00:01:37.819 CXX test/cpp_headers/mmio.o 00:01:37.819 CXX test/cpp_headers/nbd.o 00:01:37.819 CXX test/cpp_headers/notify.o 00:01:37.819 CXX test/cpp_headers/nvme.o 00:01:37.819 CXX test/cpp_headers/nvme_intel.o 00:01:37.819 CXX test/cpp_headers/nvme_ocssd.o 00:01:37.819 LINK pci_ut 00:01:37.819 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:37.819 CXX test/cpp_headers/nvme_spec.o 00:01:37.819 CXX test/cpp_headers/nvme_zns.o 00:01:38.091 CXX test/cpp_headers/nvmf_cmd.o 00:01:38.091 LINK test_dma 00:01:38.091 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:38.091 CXX test/cpp_headers/nvmf.o 00:01:38.091 CXX test/cpp_headers/nvmf_spec.o 00:01:38.091 CXX test/cpp_headers/nvmf_transport.o 00:01:38.091 CXX test/cpp_headers/opal.o 00:01:38.091 CXX test/cpp_headers/opal_spec.o 00:01:38.091 CXX test/cpp_headers/pci_ids.o 00:01:38.091 CC test/event/event_perf/event_perf.o 00:01:38.091 CC examples/thread/thread/thread_ex.o 00:01:38.091 CC test/event/reactor/reactor.o 00:01:38.091 CXX test/cpp_headers/pipe.o 00:01:38.091 CC test/event/reactor_perf/reactor_perf.o 00:01:38.091 CXX test/cpp_headers/queue.o 00:01:38.091 LINK nvme_fuzz 00:01:38.091 CXX test/cpp_headers/reduce.o 00:01:38.091 CC examples/sock/hello_world/hello_sock.o 00:01:38.091 CXX test/cpp_headers/rpc.o 00:01:38.091 CC examples/vmd/lsvmd/lsvmd.o 00:01:38.091 CC examples/idxd/perf/perf.o 00:01:38.091 CXX test/cpp_headers/scheduler.o 00:01:38.362 CXX test/cpp_headers/scsi.o 00:01:38.362 LINK spdk_nvme 00:01:38.362 CC test/event/app_repeat/app_repeat.o 00:01:38.362 LINK spdk_bdev 00:01:38.362 CXX test/cpp_headers/scsi_spec.o 00:01:38.362 CXX test/cpp_headers/sock.o 
00:01:38.362 CXX test/cpp_headers/stdinc.o 00:01:38.362 CXX test/cpp_headers/string.o 00:01:38.362 CC examples/vmd/led/led.o 00:01:38.362 CXX test/cpp_headers/thread.o 00:01:38.362 CXX test/cpp_headers/trace.o 00:01:38.362 CXX test/cpp_headers/trace_parser.o 00:01:38.362 CXX test/cpp_headers/tree.o 00:01:38.362 CC test/event/scheduler/scheduler.o 00:01:38.362 CXX test/cpp_headers/ublk.o 00:01:38.362 CXX test/cpp_headers/util.o 00:01:38.362 CXX test/cpp_headers/uuid.o 00:01:38.362 CXX test/cpp_headers/version.o 00:01:38.362 CXX test/cpp_headers/vfio_user_pci.o 00:01:38.362 CXX test/cpp_headers/vfio_user_spec.o 00:01:38.362 CXX test/cpp_headers/vhost.o 00:01:38.362 LINK reactor 00:01:38.362 CXX test/cpp_headers/vmd.o 00:01:38.362 LINK event_perf 00:01:38.362 CC app/vhost/vhost.o 00:01:38.362 LINK reactor_perf 00:01:38.362 CXX test/cpp_headers/xor.o 00:01:38.362 CXX test/cpp_headers/zipf.o 00:01:38.621 LINK mem_callbacks 00:01:38.621 LINK lsvmd 00:01:38.621 LINK spdk_nvme_identify 00:01:38.621 LINK app_repeat 00:01:38.621 LINK spdk_nvme_perf 00:01:38.621 LINK vhost_fuzz 00:01:38.621 LINK led 00:01:38.621 LINK hello_sock 00:01:38.621 LINK spdk_top 00:01:38.621 LINK thread 00:01:38.621 CC test/nvme/err_injection/err_injection.o 00:01:38.621 CC test/nvme/reset/reset.o 00:01:38.621 CC test/nvme/startup/startup.o 00:01:38.621 CC test/nvme/reserve/reserve.o 00:01:38.621 CC test/nvme/sgl/sgl.o 00:01:38.621 CC test/nvme/overhead/overhead.o 00:01:38.621 CC test/nvme/aer/aer.o 00:01:38.621 CC test/nvme/simple_copy/simple_copy.o 00:01:38.621 CC test/nvme/e2edp/nvme_dp.o 00:01:38.880 CC test/nvme/connect_stress/connect_stress.o 00:01:38.880 CC test/blobfs/mkfs/mkfs.o 00:01:38.880 CC test/accel/dif/dif.o 00:01:38.880 CC test/nvme/boot_partition/boot_partition.o 00:01:38.880 CC test/nvme/compliance/nvme_compliance.o 00:01:38.880 CC test/nvme/fused_ordering/fused_ordering.o 00:01:38.880 CC test/nvme/fdp/fdp.o 00:01:38.880 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:38.880 CC 
test/lvol/esnap/esnap.o 00:01:38.880 CC test/nvme/cuse/cuse.o 00:01:38.880 LINK scheduler 00:01:38.880 LINK idxd_perf 00:01:38.880 LINK vhost 00:01:38.880 LINK err_injection 00:01:38.880 LINK reserve 00:01:38.880 LINK mkfs 00:01:39.143 LINK boot_partition 00:01:39.143 LINK simple_copy 00:01:39.143 LINK startup 00:01:39.143 LINK doorbell_aers 00:01:39.143 LINK overhead 00:01:39.143 LINK reset 00:01:39.143 LINK sgl 00:01:39.143 LINK connect_stress 00:01:39.143 LINK nvme_dp 00:01:39.143 LINK aer 00:01:39.143 LINK memory_ut 00:01:39.143 LINK fused_ordering 00:01:39.143 CC examples/nvme/reconnect/reconnect.o 00:01:39.143 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:39.143 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:39.143 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:39.143 CC examples/nvme/hello_world/hello_world.o 00:01:39.143 CC examples/nvme/arbitration/arbitration.o 00:01:39.143 CC examples/nvme/hotplug/hotplug.o 00:01:39.143 CC examples/nvme/abort/abort.o 00:01:39.143 LINK nvme_compliance 00:01:39.143 CC examples/accel/perf/accel_perf.o 00:01:39.143 CC examples/blob/hello_world/hello_blob.o 00:01:39.401 CC examples/blob/cli/blobcli.o 00:01:39.401 LINK fdp 00:01:39.401 LINK cmb_copy 00:01:39.401 LINK dif 00:01:39.401 LINK pmr_persistence 00:01:39.401 LINK hotplug 00:01:39.401 LINK hello_world 00:01:39.659 LINK hello_blob 00:01:39.659 LINK reconnect 00:01:39.659 LINK arbitration 00:01:39.659 LINK abort 00:01:39.659 LINK nvme_manage 00:01:39.659 LINK accel_perf 00:01:39.917 LINK blobcli 00:01:39.917 CC test/bdev/bdevio/bdevio.o 00:01:39.917 LINK iscsi_fuzz 00:01:40.175 CC examples/bdev/hello_world/hello_bdev.o 00:01:40.175 CC examples/bdev/bdevperf/bdevperf.o 00:01:40.175 LINK bdevio 00:01:40.432 LINK cuse 00:01:40.432 LINK hello_bdev 00:01:41.015 LINK bdevperf 00:01:41.271 CC examples/nvmf/nvmf/nvmf.o 00:01:41.528 LINK nvmf 00:01:44.058 LINK esnap 00:01:44.058 00:01:44.058 real 0m48.882s 00:01:44.058 user 10m5.867s 00:01:44.058 sys 2m27.976s 
00:01:44.058 22:25:27 make -- common/autotest_common.sh@1118 -- $ xtrace_disable 00:01:44.058 22:25:27 make -- common/autotest_common.sh@10 -- $ set +x 00:01:44.058 ************************************ 00:01:44.058 END TEST make 00:01:44.058 ************************************ 00:01:44.316 22:25:27 -- common/autotest_common.sh@1136 -- $ return 0 00:01:44.316 22:25:27 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:44.316 22:25:27 -- pm/common@29 -- $ signal_monitor_resources TERM 00:01:44.316 22:25:27 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:01:44.316 22:25:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.316 22:25:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:44.316 22:25:27 -- pm/common@44 -- $ pid=1046989 00:01:44.316 22:25:27 -- pm/common@50 -- $ kill -TERM 1046989 00:01:44.316 22:25:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.316 22:25:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:44.316 22:25:27 -- pm/common@44 -- $ pid=1046991 00:01:44.316 22:25:27 -- pm/common@50 -- $ kill -TERM 1046991 00:01:44.316 22:25:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.316 22:25:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:44.316 22:25:27 -- pm/common@44 -- $ pid=1046993 00:01:44.316 22:25:27 -- pm/common@50 -- $ kill -TERM 1046993 00:01:44.316 22:25:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.316 22:25:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:44.316 22:25:27 -- pm/common@44 -- $ pid=1047020 00:01:44.316 22:25:27 -- pm/common@50 -- $ sudo -E kill -TERM 1047020 00:01:44.316 22:25:27 -- spdk/autotest.sh@25 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:44.316 22:25:27 -- nvmf/common.sh@7 -- # uname -s 00:01:44.316 22:25:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:44.316 22:25:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:44.316 22:25:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:44.316 22:25:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:44.316 22:25:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:44.316 22:25:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:44.316 22:25:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:44.316 22:25:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:44.316 22:25:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:44.316 22:25:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:44.316 22:25:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:01:44.316 22:25:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:01:44.316 22:25:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:44.316 22:25:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:44.316 22:25:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:44.316 22:25:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:44.316 22:25:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:44.316 22:25:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:44.316 22:25:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:44.316 22:25:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:44.316 22:25:27 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.316 22:25:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.316 22:25:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.316 22:25:27 -- paths/export.sh@5 -- # export PATH 00:01:44.316 22:25:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.316 22:25:27 -- nvmf/common.sh@47 -- # : 0 00:01:44.316 22:25:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:44.316 22:25:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:44.316 22:25:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:44.316 22:25:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:44.316 22:25:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:44.316 22:25:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:44.316 22:25:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:44.316 22:25:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:44.316 22:25:27 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:44.316 22:25:27 -- spdk/autotest.sh@32 -- # 
uname -s 00:01:44.316 22:25:27 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:44.316 22:25:27 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:44.316 22:25:27 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:44.316 22:25:27 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:44.316 22:25:27 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:44.316 22:25:27 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:44.316 22:25:27 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:44.316 22:25:27 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:44.316 22:25:27 -- spdk/autotest.sh@48 -- # udevadm_pid=1102373 00:01:44.316 22:25:27 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:44.316 22:25:27 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:44.316 22:25:27 -- pm/common@17 -- # local monitor 00:01:44.316 22:25:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.316 22:25:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.316 22:25:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.316 22:25:27 -- pm/common@21 -- # date +%s 00:01:44.316 22:25:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.316 22:25:27 -- pm/common@21 -- # date +%s 00:01:44.316 22:25:27 -- pm/common@25 -- # sleep 1 00:01:44.316 22:25:27 -- pm/common@21 -- # date +%s 00:01:44.316 22:25:27 -- pm/common@21 -- # date +%s 00:01:44.316 22:25:27 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721075127 00:01:44.316 22:25:27 -- pm/common@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721075127 00:01:44.316 22:25:27 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721075127 00:01:44.316 22:25:27 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721075127 00:01:44.316 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721075127_collect-vmstat.pm.log 00:01:44.317 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721075127_collect-cpu-load.pm.log 00:01:44.317 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721075127_collect-cpu-temp.pm.log 00:01:44.317 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721075127_collect-bmc-pm.bmc.pm.log 00:01:45.276 22:25:28 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:45.276 22:25:28 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:45.276 22:25:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:01:45.276 22:25:28 -- common/autotest_common.sh@10 -- # set +x 00:01:45.276 22:25:28 -- spdk/autotest.sh@59 -- # create_test_list 00:01:45.276 22:25:28 -- common/autotest_common.sh@740 -- # xtrace_disable 00:01:45.276 22:25:28 -- common/autotest_common.sh@10 -- # set +x 00:01:45.276 22:25:28 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:45.276 22:25:28 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:45.276 22:25:28 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:45.276 22:25:28 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:45.276 22:25:28 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:45.276 22:25:28 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:45.276 22:25:28 -- common/autotest_common.sh@1449 -- # uname 00:01:45.276 22:25:28 -- common/autotest_common.sh@1449 -- # '[' Linux = FreeBSD ']' 00:01:45.276 22:25:28 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:45.276 22:25:28 -- common/autotest_common.sh@1469 -- # uname 00:01:45.276 22:25:28 -- common/autotest_common.sh@1469 -- # [[ Linux = FreeBSD ]] 00:01:45.276 22:25:28 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:45.276 22:25:28 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:45.276 22:25:28 -- spdk/autotest.sh@72 -- # hash lcov 00:01:45.276 22:25:28 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:45.276 22:25:28 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:45.276 --rc lcov_branch_coverage=1 00:01:45.277 --rc lcov_function_coverage=1 00:01:45.277 --rc genhtml_branch_coverage=1 00:01:45.277 --rc genhtml_function_coverage=1 00:01:45.277 --rc genhtml_legend=1 00:01:45.277 --rc geninfo_all_blocks=1 00:01:45.277 ' 00:01:45.277 22:25:28 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:45.277 --rc lcov_branch_coverage=1 00:01:45.277 --rc lcov_function_coverage=1 00:01:45.277 --rc genhtml_branch_coverage=1 00:01:45.277 --rc genhtml_function_coverage=1 00:01:45.277 --rc genhtml_legend=1 00:01:45.277 --rc geninfo_all_blocks=1 00:01:45.277 ' 00:01:45.277 22:25:28 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:45.277 --rc lcov_branch_coverage=1 00:01:45.277 --rc lcov_function_coverage=1 00:01:45.277 --rc genhtml_branch_coverage=1 00:01:45.277 --rc 
genhtml_function_coverage=1 00:01:45.277 --rc genhtml_legend=1 00:01:45.277 --rc geninfo_all_blocks=1 00:01:45.277 --no-external' 00:01:45.277 22:25:28 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:45.277 --rc lcov_branch_coverage=1 00:01:45.277 --rc lcov_function_coverage=1 00:01:45.277 --rc genhtml_branch_coverage=1 00:01:45.277 --rc genhtml_function_coverage=1 00:01:45.277 --rc genhtml_legend=1 00:01:45.277 --rc geninfo_all_blocks=1 00:01:45.277 --no-external' 00:01:45.277 22:25:28 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:45.535 lcov: LCOV version 1.14 00:01:45.535 22:25:28 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:01:46.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:01:46.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:01:46.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:01:46.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:01:46.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:01:46.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:01:46.910 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:01:46.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:01:46.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:01:46.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:01:46.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:01:46.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:01:46.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:01:46.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:01:46.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:01:46.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:01:46.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:01:46.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:01:47.170 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no 
functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:01:47.170 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:01:47.170 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:01:47.170 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:01:47.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:01:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:01:47.171 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:01:47.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:01:47.171 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:01:47.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:01:47.171 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:01:47.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:01:47.171 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:01:47.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:01:47.171 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:01:47.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:01:47.171 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:01:47.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:01:47.171 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:01:47.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:01:47.171 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:01:47.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:01:47.171 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:01:47.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:01:47.171 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:01:47.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:01:47.171 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:01:47.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:01:47.171 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:01:47.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:01:47.171 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:01:47.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:01:47.171 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:01:47.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:01:47.171 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:01:47.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:01:47.171 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:01:47.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:01:47.171 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:01:47.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:01:47.171 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:01:47.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:01:47.171 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:01:47.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions 
found 00:01:47.429 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:01:47.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:01:47.429 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:01:47.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:01:47.429 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:01:47.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:01:47.429 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:01:47.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:01:47.429 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:01:47.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:01:47.429 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:01:47.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:01:47.429 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:01:47.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:01:47.429 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:02.287 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:02.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:20.425 22:26:01 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:20.425 22:26:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:02:20.425 22:26:01 -- common/autotest_common.sh@10 -- # set +x 00:02:20.425 22:26:01 -- spdk/autotest.sh@91 -- # rm -f 00:02:20.425 22:26:01 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:20.425 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:02:20.425 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:20.425 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:02:20.425 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:20.425 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:20.425 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:20.425 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:20.425 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:20.425 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:20.425 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:20.425 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:20.425 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:20.425 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:20.425 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:20.425 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:20.425 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:02:20.425 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:20.425 22:26:02 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:20.425 22:26:02 -- common/autotest_common.sh@1663 -- # zoned_devs=() 
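The `get_zoned_devs` trace that follows loops over `/sys/block/nvme*` and tests each device's `queue/zoned` sysfs attribute, treating any value other than `none` as zoned. A minimal standalone sketch of that check — the function names here are illustrative, not the actual SPDK helpers:

```shell
#!/usr/bin/env bash
# Sketch of the zoned-device check seen in the trace: a block device
# counts as zoned when its queue/zoned sysfs attribute reads anything
# other than "none" (e.g. "host-managed" or "host-aware").
is_block_zoned() {
    local zoned="$1"   # contents of /sys/block/<dev>/queue/zoned
    [[ "$zoned" != "none" ]]
}

collect_zoned() {
    # Walk sysfs the way the trace does; print the names of zoned devices.
    local dev attr
    for dev in /sys/block/nvme*; do
        attr="$dev/queue/zoned"
        [[ -e "$attr" ]] || continue
        if is_block_zoned "$(<"$attr")"; then
            basename "$dev"
        fi
    done
}
```

In the trace above this evaluates to `[[ none != none ]]`, so `zoned_devs` stays empty and `(( 0 > 0 ))` skips the zoned-device branch entirely.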
00:02:20.425 22:26:02 -- common/autotest_common.sh@1663 -- # local -gA zoned_devs 00:02:20.425 22:26:02 -- common/autotest_common.sh@1664 -- # local nvme bdf 00:02:20.425 22:26:02 -- common/autotest_common.sh@1666 -- # for nvme in /sys/block/nvme* 00:02:20.425 22:26:02 -- common/autotest_common.sh@1667 -- # is_block_zoned nvme0n1 00:02:20.425 22:26:02 -- common/autotest_common.sh@1656 -- # local device=nvme0n1 00:02:20.425 22:26:02 -- common/autotest_common.sh@1658 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:20.425 22:26:02 -- common/autotest_common.sh@1659 -- # [[ none != none ]] 00:02:20.425 22:26:02 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:20.425 22:26:02 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:20.425 22:26:02 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:20.425 22:26:02 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:20.425 22:26:02 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:20.425 22:26:02 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:20.425 No valid GPT data, bailing 00:02:20.425 22:26:02 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:20.425 22:26:02 -- scripts/common.sh@391 -- # pt= 00:02:20.425 22:26:02 -- scripts/common.sh@392 -- # return 1 00:02:20.425 22:26:02 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:20.425 1+0 records in 00:02:20.425 1+0 records out 00:02:20.425 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00145509 s, 721 MB/s 00:02:20.425 22:26:02 -- spdk/autotest.sh@118 -- # sync 00:02:20.425 22:26:02 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:20.425 22:26:02 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:20.425 22:26:02 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:20.989 22:26:04 -- spdk/autotest.sh@124 -- # uname -s 00:02:20.989 22:26:04 -- 
spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:20.989 22:26:04 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:20.989 22:26:04 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:02:20.989 22:26:04 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:02:20.989 22:26:04 -- common/autotest_common.sh@10 -- # set +x 00:02:20.989 ************************************ 00:02:20.989 START TEST setup.sh 00:02:20.989 ************************************ 00:02:20.989 22:26:04 setup.sh -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:21.246 * Looking for test storage... 00:02:21.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:21.246 22:26:04 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:21.246 22:26:04 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:21.246 22:26:04 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:21.246 22:26:04 setup.sh -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:02:21.246 22:26:04 setup.sh -- common/autotest_common.sh@1099 -- # xtrace_disable 00:02:21.246 22:26:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:21.246 ************************************ 00:02:21.246 START TEST acl 00:02:21.246 ************************************ 00:02:21.246 22:26:04 setup.sh.acl -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:21.246 * Looking for test storage... 
00:02:21.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:21.246 22:26:04 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:21.246 22:26:04 setup.sh.acl -- common/autotest_common.sh@1663 -- # zoned_devs=() 00:02:21.246 22:26:04 setup.sh.acl -- common/autotest_common.sh@1663 -- # local -gA zoned_devs 00:02:21.246 22:26:04 setup.sh.acl -- common/autotest_common.sh@1664 -- # local nvme bdf 00:02:21.246 22:26:04 setup.sh.acl -- common/autotest_common.sh@1666 -- # for nvme in /sys/block/nvme* 00:02:21.246 22:26:04 setup.sh.acl -- common/autotest_common.sh@1667 -- # is_block_zoned nvme0n1 00:02:21.246 22:26:04 setup.sh.acl -- common/autotest_common.sh@1656 -- # local device=nvme0n1 00:02:21.246 22:26:04 setup.sh.acl -- common/autotest_common.sh@1658 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:21.246 22:26:04 setup.sh.acl -- common/autotest_common.sh@1659 -- # [[ none != none ]] 00:02:21.246 22:26:04 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:21.246 22:26:04 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:21.246 22:26:04 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:21.246 22:26:04 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:21.246 22:26:04 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:21.246 22:26:04 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:21.246 22:26:04 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:22.615 22:26:06 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:22.615 22:26:06 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:22.616 22:26:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:22.616 22:26:06 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:22.616 22:26:06 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:22.616 22:26:06 setup.sh.acl -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:23.989 Hugepages 00:02:23.989 node hugesize free / total 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.989 00:02:23.989 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.989 22:26:07 setup.sh.acl -- 
setup/acl.sh@20 -- # continue 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:23.989 
22:26:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:23.989 22:26:07 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:23.989 22:26:07 setup.sh.acl -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:02:23.989 22:26:07 setup.sh.acl -- common/autotest_common.sh@1099 -- # xtrace_disable 00:02:23.989 22:26:07 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:23.989 ************************************ 00:02:23.989 START TEST denied 00:02:23.989 ************************************ 00:02:23.989 22:26:07 setup.sh.acl.denied -- common/autotest_common.sh@1117 -- # denied 00:02:23.989 22:26:07 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:02:23.989 22:26:07 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:23.989 22:26:07 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:02:23.989 22:26:07 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:23.989 22:26:07 setup.sh.acl.denied -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:25.387 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:02:25.387 22:26:08 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:02:25.387 22:26:08 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:25.387 22:26:08 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:25.387 22:26:08 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:02:25.387 22:26:08 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:02:25.387 22:26:08 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:25.387 22:26:08 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:25.387 22:26:08 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:25.387 22:26:08 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:25.387 22:26:08 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:27.921 00:02:27.921 real 0m3.883s 00:02:27.921 user 0m1.115s 00:02:27.921 sys 0m1.843s 00:02:27.921 22:26:11 setup.sh.acl.denied -- common/autotest_common.sh@1118 -- # xtrace_disable 00:02:27.921 22:26:11 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:27.921 ************************************ 00:02:27.921 END TEST denied 00:02:27.921 ************************************ 00:02:27.921 22:26:11 setup.sh.acl -- common/autotest_common.sh@1136 -- # return 0 00:02:27.921 22:26:11 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:27.921 22:26:11 setup.sh.acl -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:02:27.921 22:26:11 setup.sh.acl -- common/autotest_common.sh@1099 -- # xtrace_disable 00:02:27.921 22:26:11 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:27.921 
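The `denied` test above verifies the blocked controller by resolving `/sys/bus/pci/devices/0000:88:00.0/driver` with `readlink -f` and comparing the basename against the expected driver. A hedged sketch of that verification step, split so the comparison is testable on its own — `driver_name_matches` and `pci_driver_is` are names I introduce here, not SPDK functions:

```shell
#!/usr/bin/env bash
# Sketch of the acl.sh "verify" pattern from the trace: the driver a PCI
# function is bound to is the basename of its resolved driver symlink.
driver_name_matches() {
    # Given a resolved driver path (e.g. /sys/bus/pci/drivers/nvme),
    # check whether its basename matches the expected driver name.
    local resolved="$1" expected="$2"
    [[ "$(basename "$resolved")" == "$expected" ]]
}

pci_driver_is() {
    # Resolve the driver symlink for a PCI BDF and compare; returns
    # nonzero when the device is unbound or bound elsewhere.
    local bdf="$1" expected="$2" link="/sys/bus/pci/devices/$1/driver"
    [[ -e "$link" ]] || return 1
    driver_name_matches "$(readlink -f "$link")" "$expected"
}
```

With `PCI_BLOCKED=' 0000:88:00.0'` set, setup.sh leaves the device on `nvme`, which is exactly what the trace's `[[ nvme == \n\v\m\e ]]` comparison confirms.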
************************************ 00:02:27.921 START TEST allowed 00:02:27.921 ************************************ 00:02:27.921 22:26:11 setup.sh.acl.allowed -- common/autotest_common.sh@1117 -- # allowed 00:02:27.921 22:26:11 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:02:27.921 22:26:11 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:27.921 22:26:11 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:02:27.921 22:26:11 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:27.921 22:26:11 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:30.459 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:30.459 22:26:13 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:30.459 22:26:13 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:30.459 22:26:13 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:30.459 22:26:13 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:30.459 22:26:13 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:31.839 00:02:31.839 real 0m3.952s 00:02:31.839 user 0m1.072s 00:02:31.839 sys 0m1.704s 00:02:31.839 22:26:15 setup.sh.acl.allowed -- common/autotest_common.sh@1118 -- # xtrace_disable 00:02:31.839 22:26:15 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:31.839 ************************************ 00:02:31.839 END TEST allowed 00:02:31.839 ************************************ 00:02:31.839 22:26:15 setup.sh.acl -- common/autotest_common.sh@1136 -- # return 0 00:02:31.839 00:02:31.839 real 0m10.596s 00:02:31.839 user 0m3.247s 00:02:31.839 sys 0m5.304s 00:02:31.839 22:26:15 setup.sh.acl -- common/autotest_common.sh@1118 -- # xtrace_disable 00:02:31.839 22:26:15 setup.sh.acl -- common/autotest_common.sh@10 -- 
# set +x 00:02:31.839 ************************************ 00:02:31.839 END TEST acl 00:02:31.839 ************************************ 00:02:31.839 22:26:15 setup.sh -- common/autotest_common.sh@1136 -- # return 0 00:02:31.839 22:26:15 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:31.839 22:26:15 setup.sh -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:02:31.839 22:26:15 setup.sh -- common/autotest_common.sh@1099 -- # xtrace_disable 00:02:31.839 22:26:15 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:31.839 ************************************ 00:02:31.839 START TEST hugepages 00:02:31.839 ************************************ 00:02:31.839 22:26:15 setup.sh.hugepages -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:31.839 * Looking for test storage... 00:02:31.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:31.839 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:31.839 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:31.839 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:31.839 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:31.839 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:31.839 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:31.839 22:26:15 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:31.839 22:26:15 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:31.839 22:26:15 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:31.839 22:26:15 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:31.839 22:26:15 setup.sh.hugepages -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:31.839 22:26:15 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:31.839 22:26:15 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:31.839 22:26:15 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:31.839 22:26:15 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:31.839 22:26:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:31.839 22:26:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:31.839 22:26:15 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43679348 kB' 'MemAvailable: 47184936 kB' 'Buffers: 2704 kB' 'Cached: 10284200 kB' 'SwapCached: 0 kB' 'Active: 7280416 kB' 'Inactive: 3508668 kB' 'Active(anon): 6884928 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505648 kB' 'Mapped: 219292 kB' 'Shmem: 6382748 kB' 'KReclaimable: 190540 kB' 'Slab: 570760 kB' 'SReclaimable: 190540 kB' 'SUnreclaim: 380220 kB' 'KernelStack: 12896 kB' 'PageTables: 9032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562296 kB' 'Committed_AS: 8004828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196452 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2276956 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 48234496 kB' 00:02:31.839 22:26:15 setup.sh.hugepages -- setup/common.sh@32 
-- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:31.839 22:26:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:31.839 22:26:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:31.839 22:26:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
[... identical xtrace iterations (setup/common.sh@32 key test, continue, IFS=': ', read) elided for the remaining /proc/meminfo fields MemFree through HugePages_Rsvd ...]
00:02:31.841 22:26:15
setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:31.841 22:26:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:31.841 22:26:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:31.841 22:26:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:31.841 22:26:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:31.841 22:26:15 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:02:31.841 22:26:15 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:02:31.841 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:31.841 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:31.841 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:31.841 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGEMEM 00:02:31.841 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGENODE 00:02:31.841 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v NRHUGE 00:02:31.841 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@197 -- # get_nodes 00:02:31.841 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@26 -- # local node 00:02:31.841 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:31.841 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:02:31.841 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:31.841 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:02:31.841 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@31 -- # no_nodes=2 00:02:31.841 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 
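The xtrace above shows setup/common.sh scanning /proc/meminfo line by line with `IFS=': '` until the requested key (here `Hugepagesize`) matches, then echoing its value (2048). A minimal standalone sketch of that parsing technique follows; the function name and the sample file path are illustrative, not the actual SPDK helper:

```shell
#!/usr/bin/env bash
# Sketch: extract one field from meminfo-style "Key: value unit" text,
# mirroring the IFS=': ' read loop traced above. Hypothetical helper name.
get_meminfo_field() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$file"
    return 1
}

# Usage against a sample snippet instead of the live /proc/meminfo:
printf '%s\n' 'MemTotal: 60541692 kB' 'Hugepagesize: 2048 kB' > /tmp/meminfo.sample
get_meminfo_field Hugepagesize /tmp/meminfo.sample   # prints 2048
```

Because `IFS=': '` splits on both the colon and the space, the unit (`kB`) lands in the throwaway `_` variable and only the numeric value is returned.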
00:02:31.841 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@198 -- # clear_hp 00:02:31.841 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:02:31.841 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:02:31.841 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:31.841 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:02:31.841 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:31.841 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:02:31.841 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:02:31.841 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:31.841 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:02:31.841 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:31.841 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:02:31.841 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:02:31.841 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:02:31.841 22:26:15 setup.sh.hugepages -- setup/hugepages.sh@200 -- # run_test single_node_setup single_node_setup 00:02:31.841 22:26:15 setup.sh.hugepages -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:02:31.841 22:26:15 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # xtrace_disable 00:02:31.841 22:26:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:31.841 ************************************ 00:02:31.841 START TEST single_node_setup 00:02:31.841 ************************************ 00:02:31.841 22:26:15 
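The clear_hp trace above (hugepages.sh@38-40) iterates every per-node, per-page-size hugepage directory and echoes 0; presumably the write lands in each directory's nr_hugepages file, which this sketch assumes. It runs against a mock tree under /tmp, since writing to the real /sys/devices/system/node requires root:

```shell
#!/usr/bin/env bash
# Sketch: zero nr_hugepages for every page size on every NUMA node,
# as clear_hp does in the trace above. SYS_ROOT is a mock tree here;
# the real script targets /sys/devices/system/node directly (as root).
SYS_ROOT=/tmp/mock-sys
mkdir -p "$SYS_ROOT"/node{0,1}/hugepages/hugepages-{2048,1048576}kB
for d in "$SYS_ROOT"/node*/hugepages/hugepages-*; do
    echo 2048 > "$d/nr_hugepages"   # pretend pages are allocated
done

clear_hp() {
    local hp
    for hp in "$SYS_ROOT"/node*/hugepages/hugepages-*/nr_hugepages; do
        echo 0 > "$hp"
    done
}
clear_hp
cat "$SYS_ROOT"/node0/hugepages/hugepages-2048kB/nr_hugepages   # prints 0
```

The glob covers both the 2 MB and 1 GB page-size directories, matching the two inner `for hp` iterations per node visible in the trace.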
setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1117 -- # single_node_setup 00:02:31.841 22:26:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@135 -- # get_test_nr_hugepages 2097152 0 00:02:31.841 22:26:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@48 -- # local size=2097152 00:02:31.841 22:26:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:02:31.841 22:26:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@50 -- # shift 00:02:31.841 22:26:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # node_ids=('0') 00:02:31.841 22:26:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # local node_ids 00:02:31.841 22:26:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:02:31.841 22:26:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:02:31.841 22:26:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:02:31.841 22:26:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:02:31.841 22:26:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # local user_nodes 00:02:31.841 22:26:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:02:31.841 22:26:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:02:31.841 22:26:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # nodes_test=() 00:02:31.841 22:26:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # local -g nodes_test 00:02:31.841 22:26:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:02:31.841 22:26:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:02:31.841 22:26:15 
setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:02:31.841 22:26:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@72 -- # return 0 00:02:31.841 22:26:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # NRHUGE=1024 00:02:31.841 22:26:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # HUGENODE=0 00:02:31.841 22:26:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # setup output 00:02:31.841 22:26:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:02:31.841 22:26:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:33.221 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:33.221 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:33.221 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:33.221 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:33.221 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:33.221 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:33.221 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:33.221 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:33.221 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:33.221 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:33.221 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:33.221 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:33.221 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:33.221 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:33.221 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:33.221 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:34.160 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:34.160 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@137 -- # verify_nr_hugepages 00:02:34.160 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@88 -- # local node 00:02:34.160 22:26:17 
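The get_test_nr_hugepages trace above computes the page count from the requested size: 2097152 kB divided by the 2048 kB page size yields NRHUGE=1024, all pinned to node 0 via HUGENODE=0. The arithmetic, sketched with values from this run (variable names mirror hugepages.sh):

```shell
#!/usr/bin/env bash
# Sketch of the size -> page-count arithmetic traced above.
size=2097152             # requested total, in kB (2 GiB)
default_hugepages=2048   # Hugepagesize from /proc/meminfo, in kB
nr_hugepages=$((size / default_hugepages))
echo "$nr_hugepages"     # prints 1024
```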
setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@89 -- # local sorted_t 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@90 -- # local sorted_s 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@91 -- # local surp 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@92 -- # local resv 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@93 -- # local anon 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf 
'%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45781388 kB' 'MemAvailable: 49286976 kB' 'Buffers: 2704 kB' 'Cached: 10284296 kB' 'SwapCached: 0 kB' 'Active: 7297432 kB' 'Inactive: 3508668 kB' 'Active(anon): 6901944 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522344 kB' 'Mapped: 219460 kB' 'Shmem: 6382844 kB' 'KReclaimable: 190540 kB' 'Slab: 570476 kB' 'SReclaimable: 190540 kB' 'SUnreclaim: 379936 kB' 'KernelStack: 12800 kB' 'PageTables: 8196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8025580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196516 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2276956 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 48234496 kB' 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
[... identical xtrace iterations (setup/common.sh@32 key test against AnonHugePages, continue, IFS=': ', read) elided for the remaining /proc/meminfo fields ...]
22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.161 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.426 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.426 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.426 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.426 22:26:17 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:02:34.426 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.426 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.426 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.426 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.426 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.426 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.426 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.426 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.426 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.426 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.426 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.426 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.426 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.426 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.426 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.426 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.426 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.426 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.426 22:26:17 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.426 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.426 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.426 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.426 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.426 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.426 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.426 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.426 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.426 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.427 22:26:17 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.427 
22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 
00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # anon=0 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.427 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45784644 kB' 'MemAvailable: 49290232 kB' 'Buffers: 2704 kB' 'Cached: 10284296 kB' 'SwapCached: 0 kB' 'Active: 7298016 kB' 'Inactive: 3508668 kB' 'Active(anon): 6902528 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522896 kB' 'Mapped: 219396 kB' 'Shmem: 6382844 kB' 'KReclaimable: 190540 kB' 'Slab: 570452 kB' 'SReclaimable: 190540 kB' 'SUnreclaim: 379912 kB' 'KernelStack: 12816 kB' 'PageTables: 8220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8025604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196468 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2276956 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 48234496 kB'
[repetitive trace condensed: 00:02:34.427-00:02:34.429, setup/common.sh@31-32 loops over every meminfo key from MemTotal through CmaFree, each `[[ key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]` failing and hitting `continue`]
00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # continue 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # surp=0 00:02:34.429 22:26:17 
setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45784976 kB' 'MemAvailable: 49290564 kB' 'Buffers: 2704 kB' 'Cached: 10284308 kB' 'SwapCached: 0 kB' 'Active: 7297452 kB' 'Inactive: 3508668 kB' 'Active(anon): 6901964 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522344 kB' 'Mapped: 219316 kB' 'Shmem: 6382856 kB' 'KReclaimable: 190540 kB' 'Slab: 570456 kB' 'SReclaimable: 190540 kB' 'SUnreclaim: 379916 kB' 'KernelStack: 12832 kB' 'PageTables: 8280 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8025996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196484 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2276956 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 48234496 kB' 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- 
# [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.429 22:26:17 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.429 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.430 22:26:17 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.430 22:26:17 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.430 22:26:17 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 
00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.430 22:26:17 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.430 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.431 22:26:17 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # continue 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # resv=0 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:02:34.431 nr_hugepages=1024 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:02:34.431 resv_hugepages=0 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:02:34.431 surplus_hugepages=0 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:02:34.431 anon_hugepages=0 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local 
mem_f mem
00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45785288 kB' 'MemAvailable: 49290876 kB' 'Buffers: 2704 kB' 'Cached: 10284344 kB' 'SwapCached: 0 kB' 'Active: 7297600 kB' 'Inactive: 3508668 kB' 'Active(anon): 6902112 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522480 kB' 'Mapped: 219316 kB' 'Shmem: 6382892 kB' 'KReclaimable: 190540 kB' 'Slab: 570456 kB' 'SReclaimable: 190540 kB' 'SUnreclaim: 379916 kB' 'KernelStack: 12848 kB' 'PageTables: 8324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8026016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196484 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2276956 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 48234496 kB'
00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue
00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue
00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue
00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue
00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:02:34.431 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
val _
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 1024
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@111 -- # get_nodes
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@26 -- # local node
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@31 -- # no_nodes=2
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=0
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21191192 kB' 'MemUsed: 11685748 kB'
'SwapCached: 0 kB' 'Active: 5174032 kB' 'Inactive: 3265576 kB' 'Active(anon): 4984564 kB' 'Inactive(anon): 0 kB' 'Active(file): 189468 kB' 'Inactive(file): 3265576 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8134008 kB' 'Mapped: 100092 kB' 'AnonPages: 308776 kB' 'Shmem: 4678964 kB' 'KernelStack: 8168 kB' 'PageTables: 4676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121736 kB' 'Slab: 328436 kB' 'SReclaimable: 121736 kB' 'SUnreclaim: 206700 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:02:34.433 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.434 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.434 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.434 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.434 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.434 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.434 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.434 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.434 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.434 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.434 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.434 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.434 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.434 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.434 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.434 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.434 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.434 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.434 22:26:17 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:34.434 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.434 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.434 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.434 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:02:34.434 22:26:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:02:34.434 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:02:34.434 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:02:34.434 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:02:34.434 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:02:34.434 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:02:34.434 node0=1024 expecting 1024 00:02:34.434 22:26:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:02:34.434 00:02:34.434 real 0m2.461s 00:02:34.434 user 0m0.707s 00:02:34.434 sys 0m0.877s 00:02:34.434 22:26:17 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1118 -- # xtrace_disable 00:02:34.434 22:26:17 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@10 -- # set +x 00:02:34.434 ************************************ 00:02:34.434 END TEST single_node_setup 00:02:34.434 ************************************ 00:02:34.434 22:26:17 setup.sh.hugepages -- common/autotest_common.sh@1136 -- # return 0 00:02:34.434 22:26:17 setup.sh.hugepages -- setup/hugepages.sh@201 -- # run_test even_2G_alloc even_2G_alloc 
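The long trace above is `setup/common.sh`'s `get_meminfo` walking `/proc/meminfo` one field at a time with `IFS=': ' read -r var val _`, skipping every key until it matches the requested one (here `HugePages_Surp`, which is why the loop ends with `echo 0`). A minimal stand-alone sketch of that pattern (function name and the optional file argument are assumptions for illustration; the real helper reads `/proc/meminfo` directly):

```shell
#!/usr/bin/env bash
# Hypothetical re-creation of the get_meminfo pattern seen in the trace:
# split each line on ':' and spaces, keep the first value for a matching key.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # var is the field name, val its first token (kB suffix falls into _)
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

get_meminfo HugePages_Surp
```

With a meminfo line such as `MemTotal: 60541692 kB`, the `IFS=': '` split leaves `var=MemTotal`, `val=60541692`, and `kB` in the throwaway `_`, which is why the trace's values carry no unit.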
00:02:34.434 22:26:17 setup.sh.hugepages -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:02:34.434 22:26:17 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # xtrace_disable 00:02:34.434 22:26:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:34.434 ************************************ 00:02:34.434 START TEST even_2G_alloc 00:02:34.434 ************************************ 00:02:34.434 22:26:17 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1117 -- # even_2G_alloc 00:02:34.434 22:26:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@142 -- # get_test_nr_hugepages 2097152 00:02:34.434 22:26:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:02:34.434 22:26:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:02:34.434 22:26:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:02:34.434 22:26:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:02:34.434 22:26:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:02:34.434 22:26:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:02:34.434 22:26:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:02:34.434 22:26:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:02:34.434 22:26:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:02:34.434 22:26:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:02:34.434 22:26:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:02:34.434 22:26:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:02:34.434 22:26:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:02:34.434 
22:26:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:02:34.434 22:26:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:02:34.434 22:26:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 512 00:02:34.434 22:26:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 1 00:02:34.434 22:26:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:02:34.434 22:26:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:02:34.434 22:26:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 0 00:02:34.434 22:26:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:34.434 22:26:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:02:34.434 22:26:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # NRHUGE=1024 00:02:34.434 22:26:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # setup output 00:02:34.434 22:26:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:34.434 22:26:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:35.373 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:35.373 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:35.373 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:35.373 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:35.373 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:35.373 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:35.373 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:35.373 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:35.373 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 
00:02:35.373 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:35.373 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:35.373 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:35.373 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:35.373 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:35.373 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:35.373 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:35.373 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@144 -- # verify_nr_hugepages 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@88 -- # local node 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local surp 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local resv 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local anon 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45770136 kB' 'MemAvailable: 49275724 kB' 'Buffers: 2704 kB' 'Cached: 10284420 kB' 'SwapCached: 0 kB' 'Active: 7303876 kB' 'Inactive: 3508668 kB' 'Active(anon): 6908388 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528716 kB' 'Mapped: 219824 kB' 'Shmem: 6382968 kB' 'KReclaimable: 190540 kB' 'Slab: 570236 kB' 'SReclaimable: 190540 kB' 'SUnreclaim: 379696 kB' 'KernelStack: 12864 kB' 'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8032188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196616 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2276956 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 
48234496 kB' 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.638 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.639 22:26:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.639 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.640 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.640 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.640 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.640 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:35.640 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:35.640 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # anon=0 00:02:35.640 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:02:35.640 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:35.640 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:35.640 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:35.640 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:35.640 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:35.640 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:35.640 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:35.640 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:35.640 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:35.640 22:26:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.640 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.640 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45770980 kB' 'MemAvailable: 49276568 kB' 'Buffers: 2704 kB' 'Cached: 10284424 kB' 'SwapCached: 0 kB' 'Active: 7304096 kB' 'Inactive: 3508668 kB' 'Active(anon): 6908608 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528912 kB' 'Mapped: 220108 kB' 'Shmem: 6382972 kB' 'KReclaimable: 190540 kB' 'Slab: 570228 kB' 'SReclaimable: 190540 kB' 'SUnreclaim: 379688 kB' 'KernelStack: 12896 kB' 'PageTables: 8396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8032208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196584 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2276956 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 48234496 kB' 00:02:35.640 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.640 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue [identical common.sh@31/@32 compare-and-continue iterations over the remaining meminfo keys elided] 00:02:35.641 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.641 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:35.641 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:35.641 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # surp=0 00:02:35.641 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:02:35.641 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:35.641 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:35.641 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:35.641 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:35.641 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:35.641 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:35.641 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:35.641 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- #
mapfile -t mem 00:02:35.641 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:35.641 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.641 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.641 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45771816 kB' 'MemAvailable: 49277404 kB' 'Buffers: 2704 kB' 'Cached: 10284440 kB' 'SwapCached: 0 kB' 'Active: 7298392 kB' 'Inactive: 3508668 kB' 'Active(anon): 6902904 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523160 kB' 'Mapped: 219672 kB' 'Shmem: 6382988 kB' 'KReclaimable: 190540 kB' 'Slab: 570228 kB' 'SReclaimable: 190540 kB' 'SUnreclaim: 379688 kB' 'KernelStack: 12944 kB' 'PageTables: 8448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8026108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196612 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2276956 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 48234496 kB' 00:02:35.641 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.641 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.641 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
-- # IFS=': ' 00:02:35.642 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.642 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.642 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.642 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.642 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.642 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.642 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.642 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.642 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.642 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.642 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.642 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.642 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.642 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.642 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.642 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.642 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.643 
22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # resv=0 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:02:35.643 nr_hugepages=1024 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:02:35.643 resv_hugepages=0 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:02:35.643 surplus_hugepages=0 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:02:35.643 anon_hugepages=0 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:35.643 22:26:19 
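The trace above is setup/common.sh's `get_meminfo` loop: it splits each `/proc/meminfo` line on `': '` into a field name and a value, `continue`s past every non-matching field (xtrace backslash-escapes the right-hand pattern, which is why the log shows `\H\u\g\e\P\a\g\e\s\_\R\s\v\d`), and echoes the value once the requested field matches. A minimal standalone sketch of that pattern; the function name and the inline sample are illustrative, not SPDK's actual helper:

```shell
#!/usr/bin/env bash
# Hypothetical re-sketch of the get_meminfo parsing pattern traced above.
get_meminfo_sketch() {
	local get=$1 var val _
	# IFS=': ' splits "HugePages_Rsvd: 0" into var=HugePages_Rsvd, val=0;
	# for "MemTotal: 60541692 kB" the trailing "kB" lands in the throwaway `_`.
	while IFS=': ' read -r var val _; do
		# xtrace renders this as [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done
	return 1
}

# Inline sample standing in for /proc/meminfo (values taken from the log).
sample='MemTotal: 60541692 kB
HugePages_Total: 1024
HugePages_Rsvd: 0'

get_meminfo_sketch HugePages_Rsvd <<<"$sample"   # prints 0
```

In the log this lookup resolves `HugePages_Rsvd` to `0`, which hugepages.sh then records as `resv=0`.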
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45772860 kB' 'MemAvailable: 49278448 kB' 'Buffers: 2704 kB' 'Cached: 10284464 kB' 'SwapCached: 0 kB' 'Active: 7297796 kB' 'Inactive: 3508668 kB' 'Active(anon): 6902308 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522520 kB' 'Mapped: 219324 kB' 'Shmem: 6383012 kB' 'KReclaimable: 190540 kB' 'Slab: 570228 kB' 'SReclaimable: 190540 kB' 'SUnreclaim: 379688 kB' 'KernelStack: 12896 kB' 'PageTables: 8260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8026132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB'
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2276956 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 48234496 kB'
00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:35.643 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... identical scan iterations elided: every /proc/meminfo field from MemFree through CmaFree is tested with [[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] and skipped via IFS=': ' / read -r var val _ / continue ...]
00:02:35.645 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.645 22:26:19
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.645 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.645 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.645 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.645 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:35.645 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:35.645 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:35.645 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:02:35.645 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@26 -- # local node 00:02:35.645 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:35.645 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:02:35.645 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:35.645 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:02:35.645 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:02:35.645 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:02:35.645 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:02:35.645 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:02:35.645 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:02:35.645 22:26:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:35.645 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:02:35.645 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:35.645 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:35.645 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:35.645 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:35.645 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:35.645 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:35.645 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:35.645 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.645 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22222544 kB' 'MemUsed: 10654396 kB' 'SwapCached: 0 kB' 'Active: 5174320 kB' 'Inactive: 3265576 kB' 'Active(anon): 4984852 kB' 'Inactive(anon): 0 kB' 'Active(file): 189468 kB' 'Inactive(file): 3265576 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8134020 kB' 'Mapped: 100092 kB' 'AnonPages: 309032 kB' 'Shmem: 4678976 kB' 'KernelStack: 8216 kB' 'PageTables: 4664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121736 kB' 'Slab: 328312 kB' 'SReclaimable: 121736 kB' 'SUnreclaim: 206576 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:35.645 22:26:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:35.645 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.645 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.645 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.952 
22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.952 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.952 22:26:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.953 22:26:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:02:35.953 22:26:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23552632 kB' 'MemUsed: 4112120 kB' 'SwapCached: 0 kB' 'Active: 2123416 kB' 'Inactive: 243092 kB' 'Active(anon): 1917396 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 243092 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2153188 kB' 'Mapped: 119232 kB' 'AnonPages: 213420 kB' 'Shmem: 1704076 kB' 'KernelStack: 4712 kB' 'PageTables: 3692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 68804 kB' 'Slab: 241916 kB' 'SReclaimable: 68804 kB' 'SUnreclaim: 173112 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.953 22:26:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.953 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.954 22:26:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.954 
22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.954 22:26:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.954 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.955 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:35.955 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:35.955 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:02:35.955 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:02:35.955 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:02:35.955 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:02:35.955 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:02:35.955 node0=512 expecting 512 00:02:35.955 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:02:35.955 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:02:35.955 22:26:19 setup.sh.hugepages.even_2G_alloc -- 
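The xtrace above shows setup/common.sh walking /proc/meminfo key by key (one `[[ key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]` test plus `continue` per field) until it reaches HugePages_Surp, then echoing the value and returning. A minimal standalone sketch of that lookup pattern (an assumption reconstructed from the trace, not the verbatim SPDK helper — the function name and file argument here are illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo-style lookup seen in the xtrace: split each
# "Key: value [unit]" line on ': ' and space, skip every key that is not
# the requested one (the repeated [[ ... ]] / continue pairs in the log),
# and echo the matching value.
get_meminfo_sketch() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"          # numeric value only; any trailing "kB" lands in "_"
            return 0
        fi
    done < "$mem_f"
    echo 0                       # key absent: report 0, as the trace does
}
```

On the host traced above, `get_meminfo_sketch HugePages_Surp` would print 0, matching the `echo 0` at the end of the scan.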
setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:02:35.955 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:02:35.955 node1=512 expecting 512 00:02:35.955 22:26:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@129 -- # [[ 512 == \5\1\2 ]] 00:02:35.955 00:02:35.955 real 0m1.350s 00:02:35.955 user 0m0.554s 00:02:35.955 sys 0m0.756s 00:02:35.955 22:26:19 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:02:35.955 22:26:19 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:35.955 ************************************ 00:02:35.955 END TEST even_2G_alloc 00:02:35.955 ************************************ 00:02:35.955 22:26:19 setup.sh.hugepages -- common/autotest_common.sh@1136 -- # return 0 00:02:35.955 22:26:19 setup.sh.hugepages -- setup/hugepages.sh@202 -- # run_test odd_alloc odd_alloc 00:02:35.955 22:26:19 setup.sh.hugepages -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:02:35.955 22:26:19 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # xtrace_disable 00:02:35.955 22:26:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:35.955 ************************************ 00:02:35.955 START TEST odd_alloc 00:02:35.955 ************************************ 00:02:35.955 22:26:19 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1117 -- # odd_alloc 00:02:35.955 22:26:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@149 -- # get_test_nr_hugepages 2098176 00:02:35.955 22:26:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@48 -- # local size=2098176 00:02:35.955 22:26:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:02:35.955 22:26:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:02:35.955 22:26:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1025 00:02:35.955 22:26:19 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:02:35.955 22:26:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:02:35.955 22:26:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:02:35.955 22:26:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1025 00:02:35.955 22:26:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:02:35.955 22:26:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:02:35.955 22:26:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:02:35.955 22:26:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:02:35.955 22:26:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:02:35.955 22:26:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:02:35.955 22:26:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:02:35.955 22:26:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 513 00:02:35.955 22:26:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 1 00:02:35.955 22:26:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:02:35.955 22:26:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=513 00:02:35.955 22:26:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 0 00:02:35.955 22:26:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:35.955 22:26:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:02:35.955 22:26:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # HUGEMEM=2049 00:02:35.955 22:26:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # setup output 00:02:35.955 22:26:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 
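The hugepages.sh@80-83 trace above distributes the odd page count 1025 across 2 NUMA nodes as nodes_test[1]=512 followed by nodes_test[0]=513: each pass gives the current node the floor share of what remains, so the remainder ends up on the last node filled. A sketch of that distribution (an assumption inferred from the trace; `split_hugepages` is a hypothetical name, not an SPDK function):

```shell
#!/usr/bin/env bash
# Sketch of the per-node split seen in the odd_alloc trace: walk the nodes
# from the highest index down (as _no_nodes counts down in the log), assign
# each the floor share of the remaining pages, and carry the remainder
# forward, so 1025 pages over 2 nodes becomes node1=512, node0=513.
split_hugepages() {
    local nr=$1 nodes=$2 i share
    for ((i = nodes - 1; i >= 0; i--)); do
        share=$((nr / (i + 1)))   # floor share over the nodes still unassigned
        echo "node$i=$share"
        nr=$((nr - share))        # remainder accumulates on later (lower) nodes
    done
}
```

With an even count such as the earlier even_2G_alloc test's 1024 pages, the same loop yields node1=512 and node0=512, consistent with the `node0=512 expecting 512` / `node1=512 expecting 512` checks printed above.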
-- # [[ output == output ]] 00:02:35.955 22:26:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:36.892 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:36.892 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:36.892 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:36.892 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:36.892 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:36.892 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:36.892 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:36.892 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:36.892 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:36.892 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:36.892 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:36.892 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:36.892 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:36.892 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:36.892 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:36.892 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:36.892 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@151 -- # verify_nr_hugepages 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@88 -- # local node 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local surp 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local resv 00:02:37.159 
22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local anon 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45788920 kB' 'MemAvailable: 49294492 kB' 'Buffers: 2704 kB' 'Cached: 10284556 kB' 'SwapCached: 0 kB' 'Active: 7298684 kB' 'Inactive: 3508668 kB' 'Active(anon): 6903196 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523308 kB' 'Mapped: 219352 kB' 'Shmem: 6383104 kB' 'KReclaimable: 
190508 kB' 'Slab: 570268 kB' 'SReclaimable: 190508 kB' 'SUnreclaim: 379760 kB' 'KernelStack: 12944 kB' 'PageTables: 8328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 8026332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196628 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2276956 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 48234496 kB' 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.159 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.160 22:26:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.160 22:26:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.160 22:26:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.160 22:26:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # anon=0 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45789668 kB' 'MemAvailable: 49295240 kB' 'Buffers: 2704 kB' 'Cached: 10284560 kB' 'SwapCached: 0 kB' 'Active: 7298604 kB' 'Inactive: 3508668 kB' 'Active(anon): 6903116 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523288 kB' 'Mapped: 219424 kB' 'Shmem: 6383108 kB' 'KReclaimable: 190508 kB' 'Slab: 570312 kB' 'SReclaimable: 190508 kB' 'SUnreclaim: 379804 kB' 'KernelStack: 12960 kB' 'PageTables: 8380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 8026348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2276956 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 48234496 kB' 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.160 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.161 22:26:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.161 22:26:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.161 22:26:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.161 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # surp=0 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45790736 kB' 'MemAvailable: 49296308 kB' 'Buffers: 2704 kB' 'Cached: 10284576 kB' 'SwapCached: 0 kB' 'Active: 7298504 kB' 'Inactive: 3508668 kB' 'Active(anon): 6903016 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 523128 kB' 'Mapped: 219336 kB' 'Shmem: 6383124 kB' 'KReclaimable: 190508 kB' 'Slab: 570280 kB' 'SReclaimable: 190508 kB' 'SUnreclaim: 379772 kB' 'KernelStack: 12960 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 8026372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2276956 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 48234496 kB' 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.162 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.163 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.163 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.163 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.163 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.163 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.163 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.163 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.163 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:37.163 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.163 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.163 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.163 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.163 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.163 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.163 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.163 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.163 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.163 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.163 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.163 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.163 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.163 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.163 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.163 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.163 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.163 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.163 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.163 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:37.163 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[trace elided: setup/common.sh@32 compares each remaining /proc/meminfo key (Dirty ... Unaccepted, HugePages_Total, HugePages_Free) against HugePages_Rsvd and continues past every non-match]
00:02:37.164 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:37.164 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:37.164 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:37.164 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # resv=0
00:02:37.164 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1025
00:02:37.164 nr_hugepages=1025
00:02:37.164 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:02:37.164 resv_hugepages=0
00:02:37.164 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:02:37.164 surplus_hugepages=0
00:02:37.164 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:02:37.164 anon_hugepages=0
00:02:37.164 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@106 -- # (( 1025 == nr_hugepages + surp + resv ))
00:02:37.164 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@108 -- # (( 1025 == nr_hugepages ))
00:02:37.164 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:02:37.164 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:37.164 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:02:37.164 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:02:37.164 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:37.164 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:37.164 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:37.164 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:37.164 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:37.164 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:37.164 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:37.164 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:37.164 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45789980 kB' 'MemAvailable: 49295552 kB' 'Buffers: 2704 kB' 'Cached: 10284596 kB' 'SwapCached: 0 kB' 'Active: 7298552 kB' 'Inactive: 3508668 kB' 'Active(anon): 6903064 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523132 kB' 'Mapped: 219336 kB' 'Shmem: 6383144 kB' 'KReclaimable: 190508 kB' 'Slab: 570280 kB' 'SReclaimable: 190508 kB' 'SUnreclaim: 379772 kB' 'KernelStack: 12960 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 8026392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2276956 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 48234496 kB'
[trace elided: setup/common.sh@32 compares each /proc/meminfo key (MemTotal ... Unaccepted) against HugePages_Total and continues past every non-match]
00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
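The trace above is the shell-trace of a `get_meminfo` helper reading `/proc/meminfo` one `key: value` pair at a time and echoing `0` when the key is absent. A minimal standalone sketch of the same lookup (the function name and the `awk` approach are illustrative, not the SPDK helper itself):

```shell
#!/bin/sh
# get_meminfo_value KEY [FILE] - print the numeric value for KEY from a
# meminfo-style file (defaults to /proc/meminfo). Prints 0 when the key
# is absent, mirroring the "echo 0" fallback seen in the trace above.
get_meminfo_value() {
    key=$1
    file=${2:-/proc/meminfo}
    # $1 of each meminfo line is "Key:"; print the first numeric field
    # after the matching key, or 0 if no line matched.
    awk -v k="$key" '$1 == k ":" { print $2; found = 1 }
                     END { if (!found) print 0 }' "$file"
}
```

For example, `get_meminfo_value HugePages_Total` would print the size of the huge-page pool (1025 in this run).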
00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages + surp + resv ))
00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@26 -- # local node
00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=513
00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22229364 kB' 'MemUsed: 10647576 kB' 'SwapCached: 0 kB' 'Active: 5175188 kB' 'Inactive: 3265576 kB' 'Active(anon): 4985720 kB' 'Inactive(anon): 0 kB' 'Active(file): 189468 kB' 'Inactive(file): 3265576 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8134124 kB' 'Mapped: 100092 kB' 'AnonPages: 309772 kB' 'Shmem: 4679080 kB' 'KernelStack: 8248 kB' 'PageTables: 4768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121704 kB' 'Slab: 328380 kB' 'SReclaimable: 121704 kB' 'SUnreclaim: 206676 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[trace elided: setup/common.sh@32 begins comparing each node0 meminfo key (MemTotal, MemFree, ...) against HugePages_Surp; trace continues]
setup/common.sh@31 -- # IFS=': ' 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.166 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.167 
22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.167 22:26:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.167 22:26:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.167 22:26:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:37.167 22:26:20 
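The long `read -r var val _` / `continue` run traced above is `get_meminfo` scanning a meminfo-style file line by line for one key (here `HugePages_Surp`). The mechanism visible in the trace: `mapfile -t mem` loads the file, the expansion `"${mem[@]#Node +([0-9]) }"` strips the `Node N ` prefix that per-node files carry, and `IFS=': '` splits each line into key and value. A self-contained sketch, using a small illustrative excerpt of the values printed in the log:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo helper: load a meminfo-style file, strip the
# "Node N " prefix found in /sys/devices/system/node/nodeN/meminfo, then
# scan "key: value" pairs and print the value for the requested key.
shopt -s extglob

get_meminfo() {
    local get=$1 file=$2 var val _
    local -a mem
    mapfile -t mem < "$file"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines: "Node 0 MemFree: ..."
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

f=$(mktemp)
printf '%s\n' 'Node 0 MemTotal: 32876940 kB' \
              'Node 0 HugePages_Total: 513' \
              'Node 0 HugePages_Surp: 0' > "$f"

total=$(get_meminfo HugePages_Total "$f")
surp=$(get_meminfo HugePages_Surp "$f")
echo "HugePages_Total=$total HugePages_Surp=$surp"
rm -f "$f"
```

Because `IFS=': '` splits on both the colon and the space, `HugePages_Total: 513` yields `var=HugePages_Total` and `val=513` directly; under xtrace every non-matching line shows up as the `[[ ... ]]` / `continue` pairs filling this log.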
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23560364 kB' 'MemUsed: 4104388 kB' 'SwapCached: 0 kB' 'Active: 2123380 kB' 'Inactive: 243092 kB' 'Active(anon): 1917360 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 243092 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2153196 kB' 'Mapped: 119244 kB' 'AnonPages: 213356 
kB' 'Shmem: 1704084 kB' 'KernelStack: 4712 kB' 'PageTables: 3604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 68804 kB' 'Slab: 241900 kB' 'SReclaimable: 68804 kB' 'SUnreclaim: 173096 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.167 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.168 
22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.168 22:26:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.168 22:26:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.168 22:26:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.168 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.169 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.169 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.169 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.169 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.169 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.169 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.169 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.169 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.169 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.169 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.169 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.169 22:26:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.169 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.169 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.169 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.169 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.169 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.169 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.169 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.169 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.169 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:37.169 22:26:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:37.169 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:02:37.169 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:02:37.169 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:02:37.169 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:02:37.169 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node0=513 expecting 513' 00:02:37.169 node0=513 expecting 513 00:02:37.169 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:02:37.169 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:02:37.169 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # 
sorted_s[nodes_sys[node]]=1 00:02:37.169 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:02:37.169 node1=512 expecting 512 00:02:37.169 22:26:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@129 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:02:37.169 00:02:37.169 real 0m1.429s 00:02:37.169 user 0m0.609s 00:02:37.169 sys 0m0.780s 00:02:37.169 22:26:20 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:02:37.169 22:26:20 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:37.169 ************************************ 00:02:37.169 END TEST odd_alloc 00:02:37.169 ************************************ 00:02:37.428 22:26:20 setup.sh.hugepages -- common/autotest_common.sh@1136 -- # return 0 00:02:37.428 22:26:20 setup.sh.hugepages -- setup/hugepages.sh@203 -- # run_test custom_alloc custom_alloc 00:02:37.428 22:26:20 setup.sh.hugepages -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:02:37.428 22:26:20 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # xtrace_disable 00:02:37.428 22:26:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:37.428 ************************************ 00:02:37.428 START TEST custom_alloc 00:02:37.428 ************************************ 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1117 -- # custom_alloc 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@157 -- # local IFS=, 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@159 -- # local node 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # nodes_hp=() 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # local nodes_hp 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@162 -- # local nr_hugepages=0 _nr_hugepages=0 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@164 -- # get_test_nr_hugepages 1048576 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=1048576 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=512 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=512 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 256 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 1 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes 
- 1]=256 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 0 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@165 -- # nodes_hp[0]=512 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@166 -- # (( 2 > 1 )) 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # get_test_nr_hugepages 2097152 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 1 > 0 )) 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@168 -- # nodes_hp[1]=1024 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # get_test_nr_hugepages_per_node 00:02:37.428 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:02:37.429 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:02:37.429 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:02:37.429 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:02:37.429 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:02:37.429 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:02:37.429 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:02:37.429 22:26:20 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@73 -- # (( 2 > 0 )) 00:02:37.429 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:37.429 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:02:37.429 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:37.429 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=1024 00:02:37.429 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:02:37.429 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:02:37.429 22:26:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # setup output 00:02:37.429 22:26:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:37.429 22:26:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:38.364 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:38.364 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:38.364 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:38.364 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:38.364 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:38.364 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:38.364 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:38.364 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:38.364 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:38.364 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:38.364 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:38.364 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:38.364 0000:80:04.4 (8086 0e24): Already using 
the vfio-pci driver 00:02:38.626 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:38.626 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:38.626 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:38.626 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nr_hugepages=1536 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # verify_nr_hugepages 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@88 -- # local node 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local surp 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local resv 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local anon 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:38.626 22:26:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44747992 kB' 'MemAvailable: 48253564 kB' 'Buffers: 2704 kB' 'Cached: 10284692 kB' 'SwapCached: 0 kB' 'Active: 7295644 kB' 'Inactive: 3508668 kB' 'Active(anon): 6900156 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520212 kB' 'Mapped: 218484 kB' 'Shmem: 6383240 kB' 'KReclaimable: 190508 kB' 'Slab: 569952 kB' 'SReclaimable: 190508 kB' 'SUnreclaim: 379444 kB' 'KernelStack: 12912 kB' 'PageTables: 8108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 8012656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196724 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2276956 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 48234496 kB' 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.626 22:26:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.626 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.627 
22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.627 
22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.627 
22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # anon=0 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.627 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44748248 kB' 'MemAvailable: 48253820 kB' 'Buffers: 2704 kB' 'Cached: 
10284692 kB' 'SwapCached: 0 kB' 'Active: 7296552 kB' 'Inactive: 3508668 kB' 'Active(anon): 6901064 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521072 kB' 'Mapped: 218456 kB' 'Shmem: 6383240 kB' 'KReclaimable: 190508 kB' 'Slab: 569976 kB' 'SReclaimable: 190508 kB' 'SUnreclaim: 379468 kB' 'KernelStack: 13200 kB' 'PageTables: 8752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 8013668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196884 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2276956 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 48234496 kB' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.628 22:26:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.628 22:26:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.628 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.629 22:26:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # surp=0 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44746936 kB' 'MemAvailable: 48252508 kB' 'Buffers: 2704 kB' 'Cached: 10284692 kB' 'SwapCached: 0 kB' 'Active: 7297592 kB' 'Inactive: 3508668 kB' 
'Active(anon): 6902104 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522124 kB' 'Mapped: 218456 kB' 'Shmem: 6383240 kB' 'KReclaimable: 190508 kB' 'Slab: 570076 kB' 'SReclaimable: 190508 kB' 'SUnreclaim: 379568 kB' 'KernelStack: 13312 kB' 'PageTables: 8980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 8013692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196964 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2276956 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 48234496 kB' 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.629 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.630 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.630 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.630 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.630 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.630 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.630 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.630 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.630 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.630 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.630 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.630 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.630 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.630 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.630 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.630 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.630 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.630 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.630 22:26:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _
00:02:38.630 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:38.630 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... identical "IFS=': ' / read -r var val _ / [[ <key> == HugePages_Rsvd ]] / continue" trace lines for the remaining /proc/meminfo keys (Dirty through HugePages_Free) elided ...]
00:02:38.631 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:38.631 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:38.631 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:38.631 22:26:22 setup.sh.hugepages.custom_alloc --
setup/hugepages.sh@99 -- # resv=0
00:02:38.631 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1536
00:02:38.631 nr_hugepages=1536
00:02:38.631 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:02:38.631 resv_hugepages=0
00:02:38.631 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:02:38.631 surplus_hugepages=0
00:02:38.631 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:02:38.631 anon_hugepages=0
00:02:38.631 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@106 -- # (( 1536 == nr_hugepages + surp + resv ))
00:02:38.631 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@108 -- # (( 1536 == nr_hugepages ))
00:02:38.631 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:02:38.631 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:38.631 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:02:38.631 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:38.631 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:38.631 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:38.631 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:38.631 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:38.631 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:38.631 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:38.631 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:38.631 22:26:22
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:38.631 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44744748 kB' 'MemAvailable: 48250320 kB' 'Buffers: 2704 kB' 'Cached: 10284728 kB' 'SwapCached: 0 kB' 'Active: 7296900 kB' 'Inactive: 3508668 kB' 'Active(anon): 6901412 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521748 kB' 'Mapped: 218456 kB' 'Shmem: 6383276 kB' 'KReclaimable: 190508 kB' 'Slab: 570068 kB' 'SReclaimable: 190508 kB' 'SUnreclaim: 379560 kB' 'KernelStack: 13104 kB' 'PageTables: 8980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 8013712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196788 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2276956 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 48234496 kB'
[... identical "IFS=': ' / read -r var val _ / [[ <key> == HugePages_Total ]] / continue" trace lines for the /proc/meminfo keys MemTotal through Unaccepted elided ...]
00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages + surp + resv ))
00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@26 -- # local node
00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
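The trace above is `get_meminfo` from setup/common.sh running under `set -x`: it walks /proc/meminfo (or a per-node meminfo file) line by line, splitting each line on `': '`, skipping non-matching keys with `continue`, and echoing the value of the requested key when it matches. A minimal standalone sketch of that pattern, using a here-document with illustrative values rather than the real /proc/meminfo of the machine in this log:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern traced above: scan "key: value"
# lines, skip non-matching keys with `continue`, and print the value
# of the requested key. Sample input below is illustrative only.
get_meminfo() {
	local get=$1 var val _
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done
	return 1 # key not found
}

get_meminfo HugePages_Total <<'EOF'
MemTotal: 60541692 kB
HugePages_Total: 1536
HugePages_Free: 1536
HugePages_Rsvd: 0
EOF
```

Under `set -x` each `[[ ... ]]`, `continue`, `IFS=': '`, and `read` evaluation becomes its own trace line, which is why a single lookup produces one line per /proc/meminfo key in the log.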
00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.893 22:26:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22224952 kB' 'MemUsed: 10651988 kB' 'SwapCached: 0 kB' 'Active: 5175016 kB' 'Inactive: 3265576 kB' 'Active(anon): 4985548 kB' 'Inactive(anon): 0 kB' 'Active(file): 189468 kB' 'Inactive(file): 3265576 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8134192 kB' 'Mapped: 99380 kB' 'AnonPages: 309536 kB' 'Shmem: 4679148 kB' 'KernelStack: 8536 kB' 'PageTables: 6008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121704 kB' 'Slab: 328224 kB' 'SReclaimable: 121704 kB' 'SUnreclaim: 206520 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.893 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# continue 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.894 22:26:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.894 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:38.895 22:26:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 22517064 kB' 'MemUsed: 5147688 kB' 'SwapCached: 0 kB' 'Active: 2121900 kB' 'Inactive: 243092 kB' 'Active(anon): 1915880 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 
243092 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2153288 kB' 'Mapped: 119084 kB' 'AnonPages: 211756 kB' 'Shmem: 1704176 kB' 'KernelStack: 4696 kB' 'PageTables: 3448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 68804 kB' 'Slab: 241844 kB' 'SReclaimable: 68804 kB' 'SUnreclaim: 173040 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.895 22:26:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _
00:02:38.895 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the IFS=': '/read loop iterates the remaining /proc/meminfo keys (WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free); none match HugePages_Surp, so each hits continue]
00:02:38.896 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:38.896 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:38.896 22:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:38.896 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:02:38.896 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:02:38.896 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:02:38.896 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:02:38.896 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512'
00:02:38.896 node0=512 expecting 512
00:02:38.896 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:02:38.896 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:02:38.896 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:02:38.896 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node1=1024 expecting 1024'
00:02:38.896 node1=1024 expecting 1024
00:02:38.896 22:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@129 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:02:38.896
00:02:38.896 real	0m1.488s
00:02:38.896 user	0m0.601s
00:02:38.896 sys	0m0.838s
00:02:38.896 22:26:22 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1118 -- # xtrace_disable
00:02:38.896 22:26:22 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:02:38.896 ************************************
00:02:38.896 END TEST custom_alloc
00:02:38.896 ************************************
00:02:38.896 22:26:22 setup.sh.hugepages -- common/autotest_common.sh@1136 -- # return 0
00:02:38.896 22:26:22 setup.sh.hugepages -- setup/hugepages.sh@204 -- # run_test no_shrink_alloc no_shrink_alloc
00:02:38.896 22:26:22 setup.sh.hugepages -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
00:02:38.896 22:26:22 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # xtrace_disable
00:02:38.896 22:26:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:38.896 ************************************
00:02:38.896 START TEST no_shrink_alloc
00:02:38.896 ************************************
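The per-key `[[ key == ... ]] / continue` runs that dominate this trace come from the `get_meminfo` parser in setup/common.sh, which splits each /proc/meminfo line on ': ' and returns the value for one requested key. A minimal standalone sketch of that pattern (hypothetical function name; the real SPDK helper additionally supports reading a per-NUMA-node meminfo file, which this sketch omits):

```shell
#!/usr/bin/env bash
# Sketch of the setup/common.sh get_meminfo pattern, assuming the
# system-wide /proc/meminfo only (no per-node meminfo handling).
get_meminfo_sketch() {
	local get=$1 var val _
	while IFS=': ' read -r var val _; do
		# Every non-matching key emits one "[[ key == ... ]]" plus
		# "continue" pair in the xtrace, which is why the trace
		# above is so repetitive.
		[[ $var == "$get" ]] || continue
		echo "${val%% *}"   # drop a trailing "kB" unit if present
		return 0
	done </proc/meminfo
	return 1
}

get_meminfo_sketch HugePages_Surp
```

Matching on the key name rather than a line prefix keeps the loop robust to the varying amounts of padding /proc/meminfo inserts between the colon and the value.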
00:02:38.896 22:26:22 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1117 -- # no_shrink_alloc
00:02:38.896 22:26:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@185 -- # get_test_nr_hugepages 2097152 0
00:02:38.896 22:26:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@48 -- # local size=2097152
00:02:38.896 22:26:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # (( 2 > 1 ))
00:02:38.896 22:26:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # shift
00:02:38.896 22:26:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # node_ids=('0')
00:02:38.896 22:26:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # local node_ids
00:02:38.896 22:26:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:02:38.896 22:26:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024
00:02:38.896 22:26:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0
00:02:38.896 22:26:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # user_nodes=('0')
00:02:38.896 22:26:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:02:38.896 22:26:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:02:38.896 22:26:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:02:38.896 22:26:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:02:38.896 22:26:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:02:38.896 22:26:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@68 -- # (( 1 > 0 ))
00:02:38.896 22:26:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}"
00:02:38.896 22:26:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024
00:02:38.896 22:26:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@72 -- # return 0
00:02:38.896 22:26:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # NRHUGE=1024
00:02:38.896 22:26:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # HUGENODE=0
00:02:38.896 22:26:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # setup output
00:02:38.896 22:26:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:38.896 22:26:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:39.829 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:39.829 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:39.829 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:39.829 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:39.829 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:39.829 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:40.090 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:40.090 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:40.090 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:40.090 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:40.090 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:40.090 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:40.090 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:40.090 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:40.090 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:40.090 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:40.090 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:40.090 22:26:23
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@189 -- # verify_nr_hugepages
00:02:40.090 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node
00:02:40.090 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:02:40.090 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:02:40.090 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp
00:02:40.090 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv
00:02:40.090 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon
00:02:40.090 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:40.090 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:02:40.090 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:40.090 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:40.090 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:40.090 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:40.090 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:40.090 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:40.090 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:40.090 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:40.090 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:40.090 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:40.090 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:40.090 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45817932 kB' 'MemAvailable: 49323504 kB' 'Buffers: 2704 kB' 'Cached: 10284816 kB' 'SwapCached: 0 kB' 'Active: 7295600 kB' 'Inactive: 3508668 kB' 'Active(anon): 6900112 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519968 kB' 'Mapped: 218476 kB' 'Shmem: 6383364 kB' 'KReclaimable: 190508 kB' 'Slab: 570024 kB' 'SReclaimable: 190508 kB' 'SUnreclaim: 379516 kB' 'KernelStack: 12896 kB' 'PageTables: 7996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8011540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2276956 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 48234496 kB'
00:02:40.090 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:40.090 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:02:40.090 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:40.090 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:40.090 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:40.090 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:02:40.090 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the IFS=': '/read loop iterates the remaining /proc/meminfo keys (MemAvailable through HardwareCorrupted); none match AnonHugePages, so each hits continue]
00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0
00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45821536 kB' 'MemAvailable: 49327108 kB' 'Buffers: 2704 kB' 'Cached: 10284820 kB' 'SwapCached: 0 kB' 'Active: 7298456 kB' 'Inactive: 3508668 kB' 'Active(anon): 6902968 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522840 kB' 'Mapped: 218848 kB' 'Shmem: 6383368 kB' 'KReclaimable: 190508 kB' 'Slab: 570008 kB' 'SReclaimable: 190508 kB' 'SUnreclaim: 379500 kB' 'KernelStack: 12960 kB' 'PageTables: 8084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8015024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2276956 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 48234496 kB'
00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the IFS=': '/read loop begins iterating /proc/meminfo keys (MemTotal through Active(file) shown here); none match HugePages_Surp, so each hits continue]
00:02:40.092 22:26:23
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.092 
22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.092 
22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.092 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.093 22:26:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.093 
22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:40.093 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45816052 kB' 'MemAvailable: 49321624 kB' 'Buffers: 2704 kB' 'Cached: 10284840 kB' 'SwapCached: 0 kB' 'Active: 7301056 kB' 'Inactive: 3508668 kB' 'Active(anon): 6905568 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525428 kB' 'Mapped: 219184 kB' 'Shmem: 6383388 kB' 'KReclaimable: 190508 kB' 'Slab: 570104 kB' 'SReclaimable: 190508 kB' 'SUnreclaim: 379596 kB' 'KernelStack: 12928 kB' 'PageTables: 8064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8017700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196616 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2276956 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 48234496 kB' 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.094 22:26:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.094 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.095 22:26:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.095 
22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.095 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.357 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.357 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.357 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.357 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.357 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.357 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.357 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.357 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.357 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.357 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.357 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.357 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.357 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.357 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.357 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.357 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:40.357 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.357 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.357 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.357 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.357 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.357 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.357 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.357 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.357 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.357 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.357 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.357 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.357 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.357 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:02:40.358 nr_hugepages=1024 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:02:40.358 resv_hugepages=0 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:02:40.358 surplus_hugepages=0 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:02:40.358 anon_hugepages=0 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45816052 kB' 'MemAvailable: 49321624 kB' 'Buffers: 2704 kB' 'Cached: 10284860 kB' 'SwapCached: 0 kB' 'Active: 7295804 kB' 'Inactive: 3508668 kB' 'Active(anon): 6900316 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520164 kB' 'Mapped: 219256 kB' 'Shmem: 6383408 kB' 'KReclaimable: 190508 kB' 'Slab: 570104 kB' 'SReclaimable: 190508 kB' 'SUnreclaim: 379596 kB' 'KernelStack: 12912 kB' 'PageTables: 8016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8012956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2276956 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 48234496 kB' 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.358 22:26:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.358 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.359 
22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.359 22:26:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.359 22:26:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 
00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:40.359 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21184424 kB' 'MemUsed: 11692516 kB' 'SwapCached: 0 kB' 'Active: 5177384 kB' 'Inactive: 3265576 kB' 'Active(anon): 4987916 kB' 'Inactive(anon): 0 kB' 'Active(file): 189468 kB' 'Inactive(file): 3265576 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8134188 kB' 'Mapped: 99800 kB' 'AnonPages: 311916 kB' 'Shmem: 4679144 kB' 'KernelStack: 8200 kB' 'PageTables: 4496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121704 kB' 'Slab: 328280 kB' 'SReclaimable: 121704 kB' 'SUnreclaim: 206576 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
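The per-node branch traced here swaps `mem_f` to `/sys/devices/system/node/node0/meminfo`, reads it with `mapfile`, and strips the `Node 0 ` prefix with the extglob expansion `${mem[@]#Node +([0-9]) }` before running the same field-matching loop. A hedged sketch of that prefix-stripping step, with an inline sample standing in for the sysfs file (the sample values are illustrative):

```shell
#!/usr/bin/env bash
# Per-node meminfo lines carry a "Node N " prefix; common.sh removes it
# with an extglob pattern so the generic "Key: value" parser can run.
shopt -s extglob

node_meminfo='Node 0 MemTotal: 32876940 kB
Node 0 HugePages_Total: 1024
Node 0 HugePages_Surp: 0'

mapfile -t mem <<<"$node_meminfo"
mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix from every element

for line in "${mem[@]}"; do
  IFS=': ' read -r var val _ <<<"$line"
  [[ $var == HugePages_Surp ]] && echo "node0 HugePages_Surp=$val"
done
```

With `extglob` enabled, `+([0-9])` matches one or more digits, so the same expression works for any node index, which is how the script handles `node0` and `node1` with one code path.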
00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:40.360 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:02:40.361 node0=1024 expecting 1024 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # CLEAR_HUGE=no 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # NRHUGE=512 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # HUGENODE=0 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # setup output 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:40.361 22:26:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:41.298 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:41.298 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:41.298 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:41.298 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:41.298 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:41.298 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:41.298 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:41.298 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:41.298 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:41.298 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:41.298 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:41.298 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:41.298 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:41.298 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:41.298 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:41.298 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:41.298 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:41.561 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@194 -- # verify_nr_hugepages 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@92 -- # local resv 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45784324 kB' 'MemAvailable: 49289896 kB' 'Buffers: 2704 kB' 'Cached: 10284928 kB' 'SwapCached: 0 kB' 'Active: 7295380 kB' 'Inactive: 3508668 kB' 'Active(anon): 6899892 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519544 kB' 'Mapped: 218480 kB' 'Shmem: 6383476 kB' 'KReclaimable: 190508 kB' 'Slab: 570076 kB' 'SReclaimable: 190508 kB' 'SUnreclaim: 379568 kB' 'KernelStack: 12928 kB' 'PageTables: 7936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8011852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2276956 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 48234496 kB' 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.561 22:26:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.561 22:26:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.561 22:26:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.561 22:26:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.561 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.562 22:26:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.562 22:26:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.562 22:26:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.562 22:26:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:41.562 
22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45784424 kB' 'MemAvailable: 49289996 kB' 'Buffers: 2704 kB' 'Cached: 10284932 kB' 'SwapCached: 0 kB' 'Active: 7295708 kB' 'Inactive: 3508668 kB' 'Active(anon): 6900220 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519880 kB' 'Mapped: 
218416 kB' 'Shmem: 6383480 kB' 'KReclaimable: 190508 kB' 'Slab: 570036 kB' 'SReclaimable: 190508 kB' 'SUnreclaim: 379528 kB' 'KernelStack: 12944 kB' 'PageTables: 7972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8011868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196628 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2276956 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 48234496 kB' 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.562 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.563 22:26:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.563 22:26:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.563 22:26:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.563 22:26:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.563 
22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.563 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.564 
22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0
00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45784172 kB' 'MemAvailable: 49289744 kB' 'Buffers: 2704 kB' 'Cached: 10284948 kB' 'SwapCached: 0 kB' 'Active: 7295660 kB' 'Inactive: 3508668 kB' 'Active(anon): 6900172 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519820 kB' 'Mapped: 218416 kB' 'Shmem: 6383496 kB' 'KReclaimable: 190508 kB' 'Slab: 570064 kB' 'SReclaimable: 190508 kB' 'SUnreclaim: 379556 kB' 'KernelStack: 12928 kB' 'PageTables: 7932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8011888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196628 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2276956 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 48234496 kB'
00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:41.564 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical compare/continue/read iterations for the remaining /proc/meminfo keys elided ...]
00:02:41.566 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:41.566 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:41.566 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:41.566 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0
00:02:41.566 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:02:41.566 nr_hugepages=1024
00:02:41.566 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:02:41.566 resv_hugepages=0
00:02:41.566 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:02:41.566 surplus_hugepages=0
00:02:41.566 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:02:41.566 anon_hugepages=0
00:02:41.566 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:41.566 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
00:02:41.566 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:02:41.566 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:41.566 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:41.566 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:41.566 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:41.566 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:41.566 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:41.566 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:41.566 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:41.566 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:41.566 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:41.566 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:41.566 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45784196 kB' 'MemAvailable: 49289768 kB' 'Buffers: 2704 kB' 'Cached: 10284980 kB' 'SwapCached: 0 kB' 'Active: 7295720 kB' 'Inactive: 3508668 kB' 'Active(anon): 6900232 kB' 'Inactive(anon): 0 kB' 'Active(file): 395488 kB' 'Inactive(file): 3508668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519892 kB' 'Mapped: 218416 kB' 'Shmem: 6383528 kB' 'KReclaimable: 190508 kB' 'Slab: 570064 kB' 'SReclaimable: 190508 kB' 'SUnreclaim: 379556 kB' 'KernelStack: 12960 kB' 'PageTables: 8028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8011912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196628 kB' 'VmallocChunk: 0 kB' 'Percpu: 36864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2276956 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 48234496 kB'
00:02:41.566 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:41.566 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:02:41.566 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:41.566 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical compare/continue/read iterations for the intervening /proc/meminfo keys elided ...]
00:02:41.567 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.567 22:26:24 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@32 -- # continue 00:02:41.567 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.567 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.567 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.567 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.567 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.567 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.567 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.567 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.567 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.567 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.567 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.567 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.567 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.567 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.567 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.567 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.567 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.567 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.567 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.567 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.567 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.567 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.567 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.567 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.567 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.567 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.567 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.567 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.567 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.568 22:26:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # 
no_nodes=2 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21151568 kB' 'MemUsed: 11725372 kB' 'SwapCached: 0 kB' 'Active: 5172940 kB' 'Inactive: 3265576 kB' 'Active(anon): 4983472 kB' 'Inactive(anon): 0 kB' 'Active(file): 189468 kB' 'Inactive(file): 3265576 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8134192 kB' 'Mapped: 99364 kB' 'AnonPages: 307432 kB' 'Shmem: 4679148 kB' 'KernelStack: 8184 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121704 kB' 'Slab: 328208 kB' 'SReclaimable: 121704 kB' 'SUnreclaim: 206504 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.568 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical IFS/read/[[ key == HugePages_Surp ]]/continue xtrace cycle elided for the remaining node0 meminfo fields (MemFree through HugePages_Free) ...]
00:02:41.569 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.569 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:41.569 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:41.569 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:02:41.569 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:02:41.569 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:02:41.569 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:02:41.569 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:02:41.570 node0=1024 expecting 1024 00:02:41.570 22:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:02:41.570 00:02:41.570 real 0m2.748s 00:02:41.570 user 0m1.145s 00:02:41.570 sys 0m1.519s 00:02:41.570 22:26:24 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:02:41.570 22:26:24 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:41.570 ************************************ 00:02:41.570 END TEST no_shrink_alloc 00:02:41.570 ************************************ 00:02:41.570 22:26:24 setup.sh.hugepages -- common/autotest_common.sh@1136 -- # return 0 00:02:41.570 22:26:24 setup.sh.hugepages -- setup/hugepages.sh@206 -- # clear_hp 00:02:41.570 22:26:24 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:02:41.570 22:26:24 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:02:41.570 22:26:24 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:41.570 22:26:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:02:41.570 22:26:24 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:41.570 22:26:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:02:41.570 22:26:25 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:02:41.570 22:26:25 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:41.570 22:26:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:02:41.570 22:26:25 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:41.570 22:26:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:02:41.570 22:26:25 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:02:41.570 22:26:25 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:02:41.570 00:02:41.570 real 0m9.824s 00:02:41.570 user 0m3.775s 00:02:41.570 sys 0m4.981s 00:02:41.570 22:26:25 setup.sh.hugepages -- common/autotest_common.sh@1118 -- # xtrace_disable 00:02:41.570 22:26:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:41.570 ************************************ 00:02:41.570 END TEST hugepages 00:02:41.570 ************************************ 00:02:41.570 22:26:25 setup.sh -- common/autotest_common.sh@1136 -- # return 0 00:02:41.570 22:26:25 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:41.570 22:26:25 setup.sh -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:02:41.570 22:26:25 setup.sh -- common/autotest_common.sh@1099 -- # xtrace_disable 00:02:41.570 22:26:25 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:41.828 ************************************ 00:02:41.828 START TEST driver 00:02:41.828 ************************************ 00:02:41.828 22:26:25 setup.sh.driver -- common/autotest_common.sh@1117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:41.828 * Looking for test storage... 00:02:41.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:41.828 22:26:25 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:02:41.828 22:26:25 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:41.828 22:26:25 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:44.361 22:26:27 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:02:44.361 22:26:27 setup.sh.driver -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:02:44.361 22:26:27 setup.sh.driver -- common/autotest_common.sh@1099 -- # xtrace_disable 00:02:44.361 22:26:27 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:02:44.361 ************************************ 00:02:44.361 START TEST guess_driver 00:02:44.361 ************************************ 00:02:44.361 22:26:27 setup.sh.driver.guess_driver -- common/autotest_common.sh@1117 -- # guess_driver 00:02:44.361 22:26:27 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:02:44.361 22:26:27 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:02:44.361 22:26:27 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:02:44.361 22:26:27 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:02:44.361 22:26:27 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:02:44.361 22:26:27 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:02:44.361 22:26:27 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:02:44.361 22:26:27 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:02:44.361 22:26:27 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # 
iommu_groups=(/sys/kernel/iommu_groups/*) 00:02:44.361 22:26:27 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:02:44.361 22:26:27 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:02:44.361 22:26:27 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:02:44.361 22:26:27 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:02:44.361 22:26:27 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:02:44.361 22:26:27 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:02:44.361 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:44.361 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:44.361 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:44.361 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:44.361 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:02:44.361 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:02:44.361 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:02:44.361 22:26:27 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:02:44.361 22:26:27 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:02:44.362 22:26:27 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:02:44.362 22:26:27 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:02:44.362 22:26:27 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:02:44.362 Looking for driver=vfio-pci 00:02:44.362 22:26:27 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ 
_ _ _ marker setup_driver 00:02:44.362 22:26:27 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:02:44.362 22:26:27 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:02:44.362 22:26:27 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:45.299 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:45.299 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:45.299 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:45.299 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:45.299 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:45.299 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:45.559 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:45.559 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:45.559 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:45.559 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:45.559 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:45.559 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:45.559 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:45.559 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:45.559 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:45.559 22:26:28 setup.sh.driver.guess_driver -- 
setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:45.559 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:45.559 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:45.559 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:45.559 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:45.559 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:45.559 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:45.559 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:45.559 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:45.559 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:45.559 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:45.559 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:45.559 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:45.559 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:45.559 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:45.559 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:45.559 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:45.559 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:45.559 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:45.559 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 
-- # [[ vfio-pci == vfio-pci ]] 00:02:45.560 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:45.560 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:45.560 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:45.560 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:45.560 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:45.560 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:45.560 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:45.560 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:45.560 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:45.560 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:45.560 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:45.560 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:45.560 22:26:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:46.500 22:26:29 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:46.500 22:26:29 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:46.500 22:26:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:46.500 22:26:29 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:02:46.500 22:26:29 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:02:46.500 22:26:29 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 
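The guess_driver trace above picks vfio-pci by checking the unsafe no-IOMMU knob, counting `/sys/kernel/iommu_groups/*` entries (141 on this node), and confirming the module chain with `modprobe --show-depends vfio_pci`. A minimal standalone sketch of that decision follows; the three function arguments are stand-ins for the sysfs and modprobe probes the real driver.sh performs, and the function name and signature are illustrative, not taken from the script:

```shell
#!/usr/bin/env bash
# Sketch of the driver pick logged above. The real script reads
# /sys/module/vfio/parameters/enable_unsafe_noiommu_mode, globs
# /sys/kernel/iommu_groups/*, and runs `modprobe --show-depends vfio_pci`;
# here those probes are passed in as arguments so the logic is testable.
#
#   pick_driver GROUP_COUNT UNSAFE_VFIO HAS_VFIO_KO
pick_driver() {
    local groups=$1 unsafe=$2 has_ko=$3
    # vfio is usable when IOMMU groups exist, or when unsafe
    # no-IOMMU mode was explicitly enabled (the log shows N).
    if (( groups > 0 )) || [[ $unsafe == [Yy] ]]; then
        # modprobe --show-depends listing .ko files means the
        # vfio_pci module chain resolves on this kernel.
        if [[ $has_ko == yes ]]; then
            echo vfio-pci
            return 0
        fi
    fi
    echo "No valid driver found"
}

pick_driver 141 N yes   # the GP11 case in the log: 141 groups, vfio-pci
```

On this node the first branch is taken, matching the `driver=vfio-pci` assignment in the trace.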
00:02:46.500 22:26:29 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:49.037 00:02:49.037 real 0m4.822s 00:02:49.037 user 0m1.135s 00:02:49.037 sys 0m1.859s 00:02:49.037 22:26:32 setup.sh.driver.guess_driver -- common/autotest_common.sh@1118 -- # xtrace_disable 00:02:49.037 22:26:32 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:02:49.037 ************************************ 00:02:49.037 END TEST guess_driver 00:02:49.037 ************************************ 00:02:49.037 22:26:32 setup.sh.driver -- common/autotest_common.sh@1136 -- # return 0 00:02:49.037 00:02:49.037 real 0m7.361s 00:02:49.037 user 0m1.668s 00:02:49.037 sys 0m2.863s 00:02:49.037 22:26:32 setup.sh.driver -- common/autotest_common.sh@1118 -- # xtrace_disable 00:02:49.037 22:26:32 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:02:49.037 ************************************ 00:02:49.037 END TEST driver 00:02:49.037 ************************************ 00:02:49.037 22:26:32 setup.sh -- common/autotest_common.sh@1136 -- # return 0 00:02:49.037 22:26:32 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:02:49.038 22:26:32 setup.sh -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:02:49.038 22:26:32 setup.sh -- common/autotest_common.sh@1099 -- # xtrace_disable 00:02:49.038 22:26:32 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:49.038 ************************************ 00:02:49.038 START TEST devices 00:02:49.038 ************************************ 00:02:49.038 22:26:32 setup.sh.devices -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:02:49.038 * Looking for test storage... 
00:02:49.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:49.038 22:26:32 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:02:49.038 22:26:32 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:02:49.038 22:26:32 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:49.038 22:26:32 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:50.939 22:26:33 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:02:50.939 22:26:33 setup.sh.devices -- common/autotest_common.sh@1663 -- # zoned_devs=() 00:02:50.939 22:26:33 setup.sh.devices -- common/autotest_common.sh@1663 -- # local -gA zoned_devs 00:02:50.939 22:26:33 setup.sh.devices -- common/autotest_common.sh@1664 -- # local nvme bdf 00:02:50.939 22:26:33 setup.sh.devices -- common/autotest_common.sh@1666 -- # for nvme in /sys/block/nvme* 00:02:50.939 22:26:33 setup.sh.devices -- common/autotest_common.sh@1667 -- # is_block_zoned nvme0n1 00:02:50.939 22:26:33 setup.sh.devices -- common/autotest_common.sh@1656 -- # local device=nvme0n1 00:02:50.939 22:26:33 setup.sh.devices -- common/autotest_common.sh@1658 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:50.939 22:26:33 setup.sh.devices -- common/autotest_common.sh@1659 -- # [[ none != none ]] 00:02:50.939 22:26:33 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:02:50.939 22:26:33 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:02:50.939 22:26:33 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:02:50.939 22:26:33 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:02:50.939 22:26:33 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:02:50.939 22:26:33 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:02:50.939 22:26:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
00:02:50.939 22:26:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:02:50.939 22:26:33 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:02:50.939 22:26:33 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:02:50.939 22:26:33 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:02:50.939 22:26:33 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:02:50.939 22:26:33 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:02:50.939 No valid GPT data, bailing 00:02:50.939 22:26:33 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:50.939 22:26:33 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:02:50.939 22:26:33 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:02:50.939 22:26:33 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:02:50.939 22:26:33 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:02:50.939 22:26:33 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:02:50.939 22:26:33 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:02:50.939 22:26:33 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:02:50.939 22:26:33 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:02:50.939 22:26:33 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:02:50.939 22:26:33 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:02:50.939 22:26:33 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:02:50.939 22:26:33 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:02:50.939 22:26:33 setup.sh.devices -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:02:50.939 22:26:33 setup.sh.devices -- 
common/autotest_common.sh@1099 -- # xtrace_disable 00:02:50.939 22:26:33 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:02:50.939 ************************************ 00:02:50.939 START TEST nvme_mount 00:02:50.939 ************************************ 00:02:50.939 22:26:34 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1117 -- # nvme_mount 00:02:50.939 22:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:02:50.939 22:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:02:50.939 22:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:50.939 22:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:50.939 22:26:34 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:02:50.939 22:26:34 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:02:50.939 22:26:34 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:02:50.939 22:26:34 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:02:50.939 22:26:34 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:02:50.939 22:26:34 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:02:50.939 22:26:34 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:02:50.939 22:26:34 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:02:50.939 22:26:34 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:50.939 22:26:34 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:02:50.939 22:26:34 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:02:50.939 22:26:34 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:50.939 22:26:34 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:02:50.939 22:26:34 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:02:50.939 22:26:34 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:02:51.550 Creating new GPT entries in memory. 00:02:51.550 GPT data structures destroyed! You may now partition the disk using fdisk or 00:02:51.550 other utilities. 00:02:51.550 22:26:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:02:51.550 22:26:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:51.550 22:26:35 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:02:51.550 22:26:35 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:02:51.550 22:26:35 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:02:52.934 Creating new GPT entries in memory. 00:02:52.934 The operation has completed successfully. 
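The sgdisk invocation above creates a partition spanning sectors 2048 through 2099199. That range comes from the arithmetic echoed in the trace (common.sh lines 51, 58, and 59): the 1 GiB size is converted to 512-byte sectors and added to the standard 2048-sector start. A sketch of just that arithmetic, with the values taken from the log:

```shell
# Partition-boundary arithmetic as traced above: a 1 GiB partition
# expressed in 512-byte sectors, starting at sector 2048.
size=1073741824                          # bytes; the size local in common.sh
(( size /= 512 ))                        # common.sh@51: bytes -> sectors (2097152)
part_start=2048                          # common.sh@58: first usable sector
(( part_end = part_start + size - 1 ))   # common.sh@59: inclusive end sector

# Reproduces the flag seen in the log: --new=1:2048:2099199
echo "sgdisk /dev/nvme0n1 --new=1:${part_start}:${part_end}"
```

The inclusive end sector (start + sectors - 1) is why the log shows 2099199 rather than 2099200.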
00:02:52.934 22:26:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:02:52.934 22:26:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:52.934 22:26:36 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1121393 00:02:52.934 22:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:52.934 22:26:36 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:02:52.934 22:26:36 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:52.934 22:26:36 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:02:52.934 22:26:36 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:02:52.934 22:26:36 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:52.934 22:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:52.934 22:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:52.934 22:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:02:52.934 22:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:52.934 22:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
00:02:52.934 22:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:02:52.934 22:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:52.934 22:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:02:52.934 22:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:02:52.934 22:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.934 22:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:52.934 22:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:02:52.934 22:26:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:02:52.934 22:26:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:53.869 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.869 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:02:53.869 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:02:53.869 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.869 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.869 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.869 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.869 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.869 22:26:37 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.869 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.869 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.869 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.869 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.869 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.869 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.869 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.869 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.869 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.869 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.869 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.869 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.869 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.869 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.869 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.869 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.869 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:02:53.869 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.869 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.869 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.869 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.870 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.870 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.870 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.870 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.870 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.870 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.129 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:02:54.129 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:02:54.129 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:54.129 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:54.129 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:54.129 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:02:54.129 
22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:54.129 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:54.129 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:02:54.129 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:02:54.129 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:02:54.129 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:02:54.129 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:02:54.386 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:02:54.386 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:02:54.386 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:02:54.386 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:02:54.386 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:02:54.386 22:26:37 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:02:54.386 22:26:37 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:54.386 22:26:37 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:02:54.386 22:26:37 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:02:54.386 22:26:37 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:54.386 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:54.386 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:54.386 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:02:54.386 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:54.386 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:54.386 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:02:54.386 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:54.386 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:02:54.386 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:02:54.386 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.386 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:54.386 22:26:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:02:54.386 22:26:37 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:02:54.386 22:26:37 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:55.769 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:55.769 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:02:55.769 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:02:55.769 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:55.769 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:55.769 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:55.769 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:55.769 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:55.769 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:55.769 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:55.769 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:55.769 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:55.769 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:55.769 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:55.769 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:55.769 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:55.769 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:55.769 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:02:55.770 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:55.770 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:55.770 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:55.770 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:55.770 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:55.770 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:55.770 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:55.770 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:55.770 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:55.770 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:55.770 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:55.770 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:55.770 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:55.770 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:55.770 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:55.770 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:55.770 22:26:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:55.770 22:26:38 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:55.770 22:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:02:55.770 22:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:02:55.770 22:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:55.770 22:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:55.770 22:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:55.770 22:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:55.770 22:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:02:55.770 22:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:55.770 22:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:02:55.770 22:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:02:55.770 22:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:02:55.770 22:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:02:55.770 22:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:02:55.770 22:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:02:55.770 22:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:55.770 22:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:55.770 22:26:39 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:02:55.770 22:26:39 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:02:55.770 22:26:39 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.145 22:26:40 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.145 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.146 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:02:57.146 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:02:57.146 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:02:57.146 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:02:57.146 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:57.146 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:02:57.146 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:02:57.146 22:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:02:57.146 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:02:57.146 00:02:57.146 real 0m6.426s 00:02:57.146 user 0m1.474s 00:02:57.146 sys 0m2.501s 00:02:57.146 22:26:40 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1118 -- # xtrace_disable 00:02:57.146 22:26:40 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:02:57.146 ************************************ 00:02:57.146 END TEST nvme_mount 00:02:57.146 ************************************ 00:02:57.146 22:26:40 setup.sh.devices -- common/autotest_common.sh@1136 -- # return 0 00:02:57.146 22:26:40 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 
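The long runs of `[[ 0000:xx:xx.x == \0\0\0\0\:\8\8\:\0\0\.\0 ]]` checks above come from the `verify()` helper in `setup/devices.sh`: it reads `setup.sh config` output one line at a time (`read -r pci _ _ status`), and sets `found=1` when the allowed PCI address reports the expected active mount. A simplified, self-contained sketch of that matching loop (the sample input lines below are illustrative, not real `setup.sh` output):

```shell
#!/usr/bin/env bash
# Sketch of the devices.sh verify() loop: scan per-device status lines and
# flag when the allowed PCI device shows the expected active mount.
allowed="0000:88:00.0"          # PCI_ALLOWED device under test
expected="nvme0n1:nvme0n1"      # mount signature we expect to see
found=0

# read -r pci _ _ status: first field is the PCI address, two fields are
# skipped, the remainder of the line lands in $status.
while read -r pci _ _ status; do
    if [[ $pci == "$allowed" && $status == *"$expected"* ]]; then
        found=1
    fi
done <<'EOF'
0000:88:00.0 8086 0a54 Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev
0000:00:04.7 8086 0e27 ioatdma idle
EOF

echo "found=$found"
```

Using a here-document (rather than a pipe) keeps the loop in the current shell, so the `found` flag survives for the `(( found == 1 ))` check that follows in the real script.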
00:02:57.146 22:26:40 setup.sh.devices -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:02:57.146 22:26:40 setup.sh.devices -- common/autotest_common.sh@1099 -- # xtrace_disable 00:02:57.146 22:26:40 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:02:57.146 ************************************ 00:02:57.146 START TEST dm_mount 00:02:57.146 ************************************ 00:02:57.146 22:26:40 setup.sh.devices.dm_mount -- common/autotest_common.sh@1117 -- # dm_mount 00:02:57.146 22:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:02:57.146 22:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:02:57.146 22:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:02:57.146 22:26:40 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:02:57.146 22:26:40 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:02:57.146 22:26:40 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:02:57.146 22:26:40 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:02:57.146 22:26:40 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:02:57.146 22:26:40 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:02:57.146 22:26:40 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:02:57.146 22:26:40 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:02:57.146 22:26:40 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:57.146 22:26:40 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:02:57.146 22:26:40 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:02:57.146 22:26:40 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:57.146 22:26:40 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # 
parts+=("${disk}p$part") 00:02:57.146 22:26:40 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:02:57.146 22:26:40 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:57.146 22:26:40 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:02:57.146 22:26:40 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:02:57.146 22:26:40 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:02:58.080 Creating new GPT entries in memory. 00:02:58.080 GPT data structures destroyed! You may now partition the disk using fdisk or 00:02:58.080 other utilities. 00:02:58.080 22:26:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:02:58.080 22:26:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:58.080 22:26:41 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:02:58.080 22:26:41 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:02:58.080 22:26:41 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:02:59.015 Creating new GPT entries in memory. 00:02:59.015 The operation has completed successfully. 00:02:59.015 22:26:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:02:59.015 22:26:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:59.015 22:26:42 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:02:59.015 22:26:42 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:02:59.015 22:26:42 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:00.391 The operation has completed successfully. 00:03:00.391 22:26:43 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:00.391 22:26:43 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:00.391 22:26:43 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1123776 00:03:00.391 22:26:43 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:00.391 22:26:43 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:00.391 22:26:43 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:00.391 22:26:43 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:00.391 22:26:43 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:00.391 22:26:43 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:00.391 22:26:43 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:00.391 22:26:43 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:00.391 22:26:43 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:00.391 22:26:43 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:00.391 22:26:43 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:00.391 22:26:43 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:00.391 22:26:43 setup.sh.devices.dm_mount -- 
setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:00.391 22:26:43 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:00.391 22:26:43 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:00.391 22:26:43 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:00.391 22:26:43 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:00.392 22:26:43 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:00.392 22:26:43 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:00.392 22:26:43 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:00.392 22:26:43 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:00.392 22:26:43 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:00.392 22:26:43 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:00.392 22:26:43 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:00.392 22:26:43 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:00.392 22:26:43 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:00.392 22:26:43 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:00.392 22:26:43 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:00.392 22:26:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.392 22:26:43 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:00.392 22:26:43 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:00.392 22:26:43 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:00.392 22:26:43 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:01.349 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.350 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.610 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:01.610 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:01.610 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:01.610 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:01.610 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:01.610 22:26:44 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:01.610 22:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:01.610 22:26:45 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:01.610 22:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:01.610 22:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:01.610 22:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:01.610 22:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:01.610 22:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:01.610 22:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:01.610 22:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.610 22:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:01.610 22:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:01.610 22:26:45 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:01.610 22:26:45 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:02.987 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:02.987 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:02.987 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:02.987 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.987 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:02.987 22:26:46 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.987 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L 
/dev/mapper/nvme_dm_test ]] 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:02.988 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:02.988 00:03:02.988 real 0m5.868s 00:03:02.988 user 0m1.039s 00:03:02.988 sys 0m1.657s 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:02.988 22:26:46 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:02.988 ************************************ 00:03:02.988 END TEST dm_mount 00:03:02.988 ************************************ 00:03:02.988 22:26:46 setup.sh.devices -- common/autotest_common.sh@1136 -- # return 0 00:03:02.988 22:26:46 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:02.988 22:26:46 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:02.988 22:26:46 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:02.988 22:26:46 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:02.988 22:26:46 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:02.988 22:26:46 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:02.988 22:26:46 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:03.247 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:03.247 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 
50 41 52 54 00:03:03.247 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:03.247 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:03.247 22:26:46 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:03.247 22:26:46 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:03.247 22:26:46 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:03.247 22:26:46 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:03.247 22:26:46 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:03.247 22:26:46 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:03.247 22:26:46 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:03.247 00:03:03.247 real 0m14.166s 00:03:03.247 user 0m3.165s 00:03:03.247 sys 0m5.144s 00:03:03.247 22:26:46 setup.sh.devices -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:03.247 22:26:46 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:03.247 ************************************ 00:03:03.247 END TEST devices 00:03:03.247 ************************************ 00:03:03.247 22:26:46 setup.sh -- common/autotest_common.sh@1136 -- # return 0 00:03:03.247 00:03:03.247 real 0m42.191s 00:03:03.247 user 0m11.953s 00:03:03.247 sys 0m18.453s 00:03:03.247 22:26:46 setup.sh -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:03.247 22:26:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:03.247 ************************************ 00:03:03.247 END TEST setup.sh 00:03:03.247 ************************************ 00:03:03.247 22:26:46 -- common/autotest_common.sh@1136 -- # return 0 00:03:03.247 22:26:46 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:04.624 Hugepages 00:03:04.624 node hugesize free / total 
00:03:04.624 node0 1048576kB 0 / 0 00:03:04.624 node0 2048kB 1024 / 1024 00:03:04.624 node1 1048576kB 0 / 0 00:03:04.624 node1 2048kB 1024 / 1024 00:03:04.624 00:03:04.624 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:04.624 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:04.624 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:04.624 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:04.624 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:04.624 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:04.624 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:04.624 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:04.624 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:04.624 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:04.624 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:04.624 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:04.624 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:04.624 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:04.624 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:04.624 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:04.624 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:04.624 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:04.624 22:26:47 -- spdk/autotest.sh@130 -- # uname -s 00:03:04.624 22:26:47 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:04.624 22:26:47 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:04.624 22:26:47 -- common/autotest_common.sh@1525 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:05.563 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:05.563 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:05.563 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:05.563 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:05.563 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:05.563 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:05.563 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:05.563 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 
00:03:05.563 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:05.563 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:05.823 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:05.823 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:05.823 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:05.823 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:05.823 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:05.824 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:06.763 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:06.763 22:26:50 -- common/autotest_common.sh@1526 -- # sleep 1 00:03:07.703 22:26:51 -- common/autotest_common.sh@1527 -- # bdfs=() 00:03:07.703 22:26:51 -- common/autotest_common.sh@1527 -- # local bdfs 00:03:07.703 22:26:51 -- common/autotest_common.sh@1528 -- # bdfs=($(get_nvme_bdfs)) 00:03:07.703 22:26:51 -- common/autotest_common.sh@1528 -- # get_nvme_bdfs 00:03:07.703 22:26:51 -- common/autotest_common.sh@1507 -- # bdfs=() 00:03:07.703 22:26:51 -- common/autotest_common.sh@1507 -- # local bdfs 00:03:07.703 22:26:51 -- common/autotest_common.sh@1508 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:07.703 22:26:51 -- common/autotest_common.sh@1508 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:07.703 22:26:51 -- common/autotest_common.sh@1508 -- # jq -r '.config[].params.traddr' 00:03:07.962 22:26:51 -- common/autotest_common.sh@1509 -- # (( 1 == 0 )) 00:03:07.962 22:26:51 -- common/autotest_common.sh@1513 -- # printf '%s\n' 0000:88:00.0 00:03:07.962 22:26:51 -- common/autotest_common.sh@1530 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:08.900 Waiting for block devices as requested 00:03:08.900 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:09.159 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:09.159 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:09.159 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:09.419 
0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:09.419 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:09.419 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:09.419 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:09.683 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:09.683 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:09.683 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:09.683 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:09.942 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:09.942 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:09.942 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:10.200 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:10.200 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:10.200 22:26:53 -- common/autotest_common.sh@1532 -- # for bdf in "${bdfs[@]}" 00:03:10.200 22:26:53 -- common/autotest_common.sh@1533 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:10.200 22:26:53 -- common/autotest_common.sh@1496 -- # readlink -f /sys/class/nvme/nvme0 00:03:10.200 22:26:53 -- common/autotest_common.sh@1496 -- # grep 0000:88:00.0/nvme/nvme 00:03:10.200 22:26:53 -- common/autotest_common.sh@1496 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:10.200 22:26:53 -- common/autotest_common.sh@1497 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:10.200 22:26:53 -- common/autotest_common.sh@1501 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:10.200 22:26:53 -- common/autotest_common.sh@1501 -- # printf '%s\n' nvme0 00:03:10.200 22:26:53 -- common/autotest_common.sh@1533 -- # nvme_ctrlr=/dev/nvme0 00:03:10.200 22:26:53 -- common/autotest_common.sh@1534 -- # [[ -z /dev/nvme0 ]] 00:03:10.200 22:26:53 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:03:10.200 22:26:53 -- common/autotest_common.sh@1539 -- # grep oacs 00:03:10.200 22:26:53 -- common/autotest_common.sh@1539 -- # cut -d: -f2 
00:03:10.200 22:26:53 -- common/autotest_common.sh@1539 -- # oacs=' 0xf' 00:03:10.200 22:26:53 -- common/autotest_common.sh@1540 -- # oacs_ns_manage=8 00:03:10.200 22:26:53 -- common/autotest_common.sh@1542 -- # [[ 8 -ne 0 ]] 00:03:10.200 22:26:53 -- common/autotest_common.sh@1548 -- # nvme id-ctrl /dev/nvme0 00:03:10.200 22:26:53 -- common/autotest_common.sh@1548 -- # grep unvmcap 00:03:10.200 22:26:53 -- common/autotest_common.sh@1548 -- # cut -d: -f2 00:03:10.200 22:26:53 -- common/autotest_common.sh@1548 -- # unvmcap=' 0' 00:03:10.200 22:26:53 -- common/autotest_common.sh@1549 -- # [[ 0 -eq 0 ]] 00:03:10.200 22:26:53 -- common/autotest_common.sh@1551 -- # continue 00:03:10.200 22:26:53 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:10.200 22:26:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:10.200 22:26:53 -- common/autotest_common.sh@10 -- # set +x 00:03:10.456 22:26:53 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:10.456 22:26:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:10.456 22:26:53 -- common/autotest_common.sh@10 -- # set +x 00:03:10.456 22:26:53 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:11.834 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:11.834 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:11.834 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:11.834 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:11.834 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:11.834 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:11.834 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:11.834 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:11.834 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:11.834 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:11.834 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:11.834 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:11.834 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 
00:03:11.834 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:11.834 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:11.834 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:12.774 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:12.774 22:26:56 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:12.774 22:26:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:12.774 22:26:56 -- common/autotest_common.sh@10 -- # set +x 00:03:12.774 22:26:56 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:12.774 22:26:56 -- common/autotest_common.sh@1585 -- # mapfile -t bdfs 00:03:12.774 22:26:56 -- common/autotest_common.sh@1585 -- # get_nvme_bdfs_by_id 0x0a54 00:03:12.774 22:26:56 -- common/autotest_common.sh@1571 -- # bdfs=() 00:03:12.774 22:26:56 -- common/autotest_common.sh@1571 -- # local bdfs 00:03:12.774 22:26:56 -- common/autotest_common.sh@1573 -- # get_nvme_bdfs 00:03:12.774 22:26:56 -- common/autotest_common.sh@1507 -- # bdfs=() 00:03:12.774 22:26:56 -- common/autotest_common.sh@1507 -- # local bdfs 00:03:12.774 22:26:56 -- common/autotest_common.sh@1508 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:12.774 22:26:56 -- common/autotest_common.sh@1508 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:12.774 22:26:56 -- common/autotest_common.sh@1508 -- # jq -r '.config[].params.traddr' 00:03:12.774 22:26:56 -- common/autotest_common.sh@1509 -- # (( 1 == 0 )) 00:03:12.774 22:26:56 -- common/autotest_common.sh@1513 -- # printf '%s\n' 0000:88:00.0 00:03:12.774 22:26:56 -- common/autotest_common.sh@1573 -- # for bdf in $(get_nvme_bdfs) 00:03:12.774 22:26:56 -- common/autotest_common.sh@1574 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:03:12.774 22:26:56 -- common/autotest_common.sh@1574 -- # device=0x0a54 00:03:12.774 22:26:56 -- common/autotest_common.sh@1575 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:12.774 22:26:56 -- common/autotest_common.sh@1576 -- # bdfs+=($bdf) 
00:03:12.774 22:26:56 -- common/autotest_common.sh@1580 -- # printf '%s\n' 0000:88:00.0 00:03:12.774 22:26:56 -- common/autotest_common.sh@1586 -- # [[ -z 0000:88:00.0 ]] 00:03:12.774 22:26:56 -- common/autotest_common.sh@1591 -- # spdk_tgt_pid=1128956 00:03:12.774 22:26:56 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:12.774 22:26:56 -- common/autotest_common.sh@1592 -- # waitforlisten 1128956 00:03:12.774 22:26:56 -- common/autotest_common.sh@823 -- # '[' -z 1128956 ']' 00:03:12.774 22:26:56 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:12.774 22:26:56 -- common/autotest_common.sh@828 -- # local max_retries=100 00:03:12.774 22:26:56 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:12.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:12.774 22:26:56 -- common/autotest_common.sh@832 -- # xtrace_disable 00:03:12.774 22:26:56 -- common/autotest_common.sh@10 -- # set +x 00:03:12.774 [2024-07-15 22:26:56.250745] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:03:12.774 [2024-07-15 22:26:56.250848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1128956 ] 00:03:13.034 [2024-07-15 22:26:56.309533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:13.034 [2024-07-15 22:26:56.419097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:13.293 22:26:56 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:03:13.293 22:26:56 -- common/autotest_common.sh@856 -- # return 0 00:03:13.293 22:26:56 -- common/autotest_common.sh@1594 -- # bdf_id=0 00:03:13.293 22:26:56 -- common/autotest_common.sh@1595 -- # for bdf in "${bdfs[@]}" 00:03:13.293 22:26:56 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:03:16.592 nvme0n1 00:03:16.592 22:26:59 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:16.592 [2024-07-15 22:27:00.008366] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:16.592 [2024-07-15 22:27:00.008417] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:16.592 request: 00:03:16.592 { 00:03:16.592 "nvme_ctrlr_name": "nvme0", 00:03:16.592 "password": "test", 00:03:16.592 "method": "bdev_nvme_opal_revert", 00:03:16.592 "req_id": 1 00:03:16.592 } 00:03:16.592 Got JSON-RPC error response 00:03:16.592 response: 00:03:16.592 { 00:03:16.592 "code": -32603, 00:03:16.592 "message": "Internal error" 00:03:16.592 } 00:03:16.592 22:27:00 -- common/autotest_common.sh@1598 -- # true 00:03:16.592 22:27:00 -- common/autotest_common.sh@1599 -- # (( ++bdf_id )) 00:03:16.592 22:27:00 -- 
common/autotest_common.sh@1602 -- # killprocess 1128956 00:03:16.592 22:27:00 -- common/autotest_common.sh@942 -- # '[' -z 1128956 ']' 00:03:16.592 22:27:00 -- common/autotest_common.sh@946 -- # kill -0 1128956 00:03:16.592 22:27:00 -- common/autotest_common.sh@947 -- # uname 00:03:16.592 22:27:00 -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:03:16.592 22:27:00 -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1128956 00:03:16.592 22:27:00 -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:03:16.592 22:27:00 -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:03:16.592 22:27:00 -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1128956' 00:03:16.592 killing process with pid 1128956 00:03:16.592 22:27:00 -- common/autotest_common.sh@961 -- # kill 1128956 00:03:16.592 22:27:00 -- common/autotest_common.sh@966 -- # wait 1128956 00:03:18.493 22:27:01 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:18.493 22:27:01 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:18.493 22:27:01 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:18.493 22:27:01 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:18.493 22:27:01 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:18.493 22:27:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:18.493 22:27:01 -- common/autotest_common.sh@10 -- # set +x 00:03:18.493 22:27:01 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:18.493 22:27:01 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:18.493 22:27:01 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:18.493 22:27:01 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:18.493 22:27:01 -- common/autotest_common.sh@10 -- # set +x 00:03:18.493 ************************************ 00:03:18.493 START TEST env 00:03:18.493 ************************************ 00:03:18.493 22:27:01 env -- common/autotest_common.sh@1117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:18.493 * Looking for test storage... 00:03:18.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:18.493 22:27:01 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:18.493 22:27:01 env -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:18.493 22:27:01 env -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:18.493 22:27:01 env -- common/autotest_common.sh@10 -- # set +x 00:03:18.493 ************************************ 00:03:18.493 START TEST env_memory 00:03:18.493 ************************************ 00:03:18.493 22:27:01 env.env_memory -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:18.493 00:03:18.493 00:03:18.493 CUnit - A unit testing framework for C - Version 2.1-3 00:03:18.493 http://cunit.sourceforge.net/ 00:03:18.493 00:03:18.493 00:03:18.493 Suite: memory 00:03:18.493 Test: alloc and free memory map ...[2024-07-15 22:27:01.981768] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:18.493 passed 00:03:18.752 Test: mem map translation ...[2024-07-15 22:27:02.002800] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:18.752 [2024-07-15 22:27:02.002822] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:18.752 [2024-07-15 22:27:02.002873] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:18.752 
[2024-07-15 22:27:02.002890] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:18.752 passed 00:03:18.752 Test: mem map registration ...[2024-07-15 22:27:02.044711] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:18.752 [2024-07-15 22:27:02.044731] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:18.752 passed 00:03:18.752 Test: mem map adjacent registrations ...passed 00:03:18.752 00:03:18.752 Run Summary: Type Total Ran Passed Failed Inactive 00:03:18.752 suites 1 1 n/a 0 0 00:03:18.752 tests 4 4 4 0 0 00:03:18.752 asserts 152 152 152 0 n/a 00:03:18.752 00:03:18.752 Elapsed time = 0.145 seconds 00:03:18.752 00:03:18.752 real 0m0.153s 00:03:18.752 user 0m0.146s 00:03:18.752 sys 0m0.006s 00:03:18.752 22:27:02 env.env_memory -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:18.752 22:27:02 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:18.752 ************************************ 00:03:18.752 END TEST env_memory 00:03:18.752 ************************************ 00:03:18.752 22:27:02 env -- common/autotest_common.sh@1136 -- # return 0 00:03:18.752 22:27:02 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:18.752 22:27:02 env -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:18.752 22:27:02 env -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:18.752 22:27:02 env -- common/autotest_common.sh@10 -- # set +x 00:03:18.752 ************************************ 00:03:18.752 START TEST env_vtophys 00:03:18.752 ************************************ 00:03:18.752 22:27:02 env.env_vtophys -- 
common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:18.752 EAL: lib.eal log level changed from notice to debug 00:03:18.752 EAL: Detected lcore 0 as core 0 on socket 0 00:03:18.752 EAL: Detected lcore 1 as core 1 on socket 0 00:03:18.752 EAL: Detected lcore 2 as core 2 on socket 0 00:03:18.752 EAL: Detected lcore 3 as core 3 on socket 0 00:03:18.752 EAL: Detected lcore 4 as core 4 on socket 0 00:03:18.752 EAL: Detected lcore 5 as core 5 on socket 0 00:03:18.752 EAL: Detected lcore 6 as core 8 on socket 0 00:03:18.752 EAL: Detected lcore 7 as core 9 on socket 0 00:03:18.752 EAL: Detected lcore 8 as core 10 on socket 0 00:03:18.752 EAL: Detected lcore 9 as core 11 on socket 0 00:03:18.752 EAL: Detected lcore 10 as core 12 on socket 0 00:03:18.752 EAL: Detected lcore 11 as core 13 on socket 0 00:03:18.752 EAL: Detected lcore 12 as core 0 on socket 1 00:03:18.752 EAL: Detected lcore 13 as core 1 on socket 1 00:03:18.752 EAL: Detected lcore 14 as core 2 on socket 1 00:03:18.752 EAL: Detected lcore 15 as core 3 on socket 1 00:03:18.752 EAL: Detected lcore 16 as core 4 on socket 1 00:03:18.752 EAL: Detected lcore 17 as core 5 on socket 1 00:03:18.752 EAL: Detected lcore 18 as core 8 on socket 1 00:03:18.752 EAL: Detected lcore 19 as core 9 on socket 1 00:03:18.752 EAL: Detected lcore 20 as core 10 on socket 1 00:03:18.752 EAL: Detected lcore 21 as core 11 on socket 1 00:03:18.752 EAL: Detected lcore 22 as core 12 on socket 1 00:03:18.752 EAL: Detected lcore 23 as core 13 on socket 1 00:03:18.752 EAL: Detected lcore 24 as core 0 on socket 0 00:03:18.752 EAL: Detected lcore 25 as core 1 on socket 0 00:03:18.752 EAL: Detected lcore 26 as core 2 on socket 0 00:03:18.752 EAL: Detected lcore 27 as core 3 on socket 0 00:03:18.752 EAL: Detected lcore 28 as core 4 on socket 0 00:03:18.752 EAL: Detected lcore 29 as core 5 on socket 0 00:03:18.752 EAL: Detected lcore 30 as core 8 on socket 0 00:03:18.752 EAL: Detected 
lcore 31 as core 9 on socket 0 00:03:18.752 EAL: Detected lcore 32 as core 10 on socket 0 00:03:18.752 EAL: Detected lcore 33 as core 11 on socket 0 00:03:18.752 EAL: Detected lcore 34 as core 12 on socket 0 00:03:18.752 EAL: Detected lcore 35 as core 13 on socket 0 00:03:18.752 EAL: Detected lcore 36 as core 0 on socket 1 00:03:18.752 EAL: Detected lcore 37 as core 1 on socket 1 00:03:18.752 EAL: Detected lcore 38 as core 2 on socket 1 00:03:18.752 EAL: Detected lcore 39 as core 3 on socket 1 00:03:18.752 EAL: Detected lcore 40 as core 4 on socket 1 00:03:18.753 EAL: Detected lcore 41 as core 5 on socket 1 00:03:18.753 EAL: Detected lcore 42 as core 8 on socket 1 00:03:18.753 EAL: Detected lcore 43 as core 9 on socket 1 00:03:18.753 EAL: Detected lcore 44 as core 10 on socket 1 00:03:18.753 EAL: Detected lcore 45 as core 11 on socket 1 00:03:18.753 EAL: Detected lcore 46 as core 12 on socket 1 00:03:18.753 EAL: Detected lcore 47 as core 13 on socket 1 00:03:18.753 EAL: Maximum logical cores by configuration: 128 00:03:18.753 EAL: Detected CPU lcores: 48 00:03:18.753 EAL: Detected NUMA nodes: 2 00:03:18.753 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:18.753 EAL: Detected shared linkage of DPDK 00:03:18.753 EAL: No shared files mode enabled, IPC will be disabled 00:03:18.753 EAL: Bus pci wants IOVA as 'DC' 00:03:18.753 EAL: Buses did not request a specific IOVA mode. 00:03:18.753 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:18.753 EAL: Selected IOVA mode 'VA' 00:03:18.753 EAL: Probing VFIO support... 00:03:18.753 EAL: IOMMU type 1 (Type 1) is supported 00:03:18.753 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:18.753 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:18.753 EAL: VFIO support initialized 00:03:18.753 EAL: Ask a virtual area of 0x2e000 bytes 00:03:18.753 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:18.753 EAL: Setting up physically contiguous memory... 
00:03:18.753 EAL: Setting maximum number of open files to 524288 00:03:18.753 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:18.753 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:18.753 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:18.753 EAL: Ask a virtual area of 0x61000 bytes 00:03:18.753 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:18.753 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:18.753 EAL: Ask a virtual area of 0x400000000 bytes 00:03:18.753 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:18.753 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:18.753 EAL: Ask a virtual area of 0x61000 bytes 00:03:18.753 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:18.753 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:18.753 EAL: Ask a virtual area of 0x400000000 bytes 00:03:18.753 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:18.753 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:18.753 EAL: Ask a virtual area of 0x61000 bytes 00:03:18.753 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:18.753 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:18.753 EAL: Ask a virtual area of 0x400000000 bytes 00:03:18.753 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:18.753 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:18.753 EAL: Ask a virtual area of 0x61000 bytes 00:03:18.753 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:18.753 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:18.753 EAL: Ask a virtual area of 0x400000000 bytes 00:03:18.753 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:18.753 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:18.753 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:03:18.753 EAL: Ask a virtual area of 0x61000 bytes 00:03:18.753 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:18.753 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:18.753 EAL: Ask a virtual area of 0x400000000 bytes 00:03:18.753 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:18.753 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:18.753 EAL: Ask a virtual area of 0x61000 bytes 00:03:18.753 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:18.753 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:18.753 EAL: Ask a virtual area of 0x400000000 bytes 00:03:18.753 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:18.753 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:18.753 EAL: Ask a virtual area of 0x61000 bytes 00:03:18.753 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:18.753 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:18.753 EAL: Ask a virtual area of 0x400000000 bytes 00:03:18.753 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:18.753 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:18.753 EAL: Ask a virtual area of 0x61000 bytes 00:03:18.753 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:18.753 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:18.753 EAL: Ask a virtual area of 0x400000000 bytes 00:03:18.753 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:18.753 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:18.753 EAL: Hugepages will be freed exactly as allocated. 
00:03:18.753 EAL: No shared files mode enabled, IPC is disabled 00:03:18.753 EAL: No shared files mode enabled, IPC is disabled 00:03:18.753 EAL: TSC frequency is ~2700000 KHz 00:03:18.753 EAL: Main lcore 0 is ready (tid=7f7e6af3aa00;cpuset=[0]) 00:03:18.753 EAL: Trying to obtain current memory policy. 00:03:18.753 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:18.753 EAL: Restoring previous memory policy: 0 00:03:18.753 EAL: request: mp_malloc_sync 00:03:18.753 EAL: No shared files mode enabled, IPC is disabled 00:03:18.753 EAL: Heap on socket 0 was expanded by 2MB 00:03:18.753 EAL: No shared files mode enabled, IPC is disabled 00:03:18.753 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:18.753 EAL: Mem event callback 'spdk:(nil)' registered 00:03:18.753 00:03:18.753 00:03:18.753 CUnit - A unit testing framework for C - Version 2.1-3 00:03:18.753 http://cunit.sourceforge.net/ 00:03:18.753 00:03:18.753 00:03:18.753 Suite: components_suite 00:03:18.753 Test: vtophys_malloc_test ...passed 00:03:18.753 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:18.753 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:18.753 EAL: Restoring previous memory policy: 4 00:03:18.753 EAL: Calling mem event callback 'spdk:(nil)' 00:03:18.753 EAL: request: mp_malloc_sync 00:03:18.753 EAL: No shared files mode enabled, IPC is disabled 00:03:18.753 EAL: Heap on socket 0 was expanded by 4MB 00:03:18.753 EAL: Calling mem event callback 'spdk:(nil)' 00:03:18.753 EAL: request: mp_malloc_sync 00:03:18.753 EAL: No shared files mode enabled, IPC is disabled 00:03:18.753 EAL: Heap on socket 0 was shrunk by 4MB 00:03:18.753 EAL: Trying to obtain current memory policy. 
00:03:18.753 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:18.753 EAL: Restoring previous memory policy: 4 00:03:18.753 EAL: Calling mem event callback 'spdk:(nil)' 00:03:18.753 EAL: request: mp_malloc_sync 00:03:18.753 EAL: No shared files mode enabled, IPC is disabled 00:03:18.753 EAL: Heap on socket 0 was expanded by 6MB 00:03:18.753 EAL: Calling mem event callback 'spdk:(nil)' 00:03:18.753 EAL: request: mp_malloc_sync 00:03:18.753 EAL: No shared files mode enabled, IPC is disabled 00:03:18.753 EAL: Heap on socket 0 was shrunk by 6MB 00:03:18.753 EAL: Trying to obtain current memory policy. 00:03:18.753 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:18.753 EAL: Restoring previous memory policy: 4 00:03:18.753 EAL: Calling mem event callback 'spdk:(nil)' 00:03:18.753 EAL: request: mp_malloc_sync 00:03:18.753 EAL: No shared files mode enabled, IPC is disabled 00:03:18.753 EAL: Heap on socket 0 was expanded by 10MB 00:03:18.753 EAL: Calling mem event callback 'spdk:(nil)' 00:03:18.753 EAL: request: mp_malloc_sync 00:03:18.753 EAL: No shared files mode enabled, IPC is disabled 00:03:18.753 EAL: Heap on socket 0 was shrunk by 10MB 00:03:18.753 EAL: Trying to obtain current memory policy. 00:03:18.753 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:18.753 EAL: Restoring previous memory policy: 4 00:03:18.753 EAL: Calling mem event callback 'spdk:(nil)' 00:03:18.753 EAL: request: mp_malloc_sync 00:03:18.753 EAL: No shared files mode enabled, IPC is disabled 00:03:18.754 EAL: Heap on socket 0 was expanded by 18MB 00:03:18.754 EAL: Calling mem event callback 'spdk:(nil)' 00:03:18.754 EAL: request: mp_malloc_sync 00:03:18.754 EAL: No shared files mode enabled, IPC is disabled 00:03:18.754 EAL: Heap on socket 0 was shrunk by 18MB 00:03:18.754 EAL: Trying to obtain current memory policy. 
00:03:18.754 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:18.754 EAL: Restoring previous memory policy: 4 00:03:18.754 EAL: Calling mem event callback 'spdk:(nil)' 00:03:18.754 EAL: request: mp_malloc_sync 00:03:18.754 EAL: No shared files mode enabled, IPC is disabled 00:03:18.754 EAL: Heap on socket 0 was expanded by 34MB 00:03:18.754 EAL: Calling mem event callback 'spdk:(nil)' 00:03:19.013 EAL: request: mp_malloc_sync 00:03:19.013 EAL: No shared files mode enabled, IPC is disabled 00:03:19.013 EAL: Heap on socket 0 was shrunk by 34MB 00:03:19.013 EAL: Trying to obtain current memory policy. 00:03:19.013 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:19.013 EAL: Restoring previous memory policy: 4 00:03:19.013 EAL: Calling mem event callback 'spdk:(nil)' 00:03:19.013 EAL: request: mp_malloc_sync 00:03:19.013 EAL: No shared files mode enabled, IPC is disabled 00:03:19.013 EAL: Heap on socket 0 was expanded by 66MB 00:03:19.013 EAL: Calling mem event callback 'spdk:(nil)' 00:03:19.013 EAL: request: mp_malloc_sync 00:03:19.013 EAL: No shared files mode enabled, IPC is disabled 00:03:19.013 EAL: Heap on socket 0 was shrunk by 66MB 00:03:19.013 EAL: Trying to obtain current memory policy. 00:03:19.013 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:19.013 EAL: Restoring previous memory policy: 4 00:03:19.013 EAL: Calling mem event callback 'spdk:(nil)' 00:03:19.013 EAL: request: mp_malloc_sync 00:03:19.013 EAL: No shared files mode enabled, IPC is disabled 00:03:19.013 EAL: Heap on socket 0 was expanded by 130MB 00:03:19.013 EAL: Calling mem event callback 'spdk:(nil)' 00:03:19.013 EAL: request: mp_malloc_sync 00:03:19.013 EAL: No shared files mode enabled, IPC is disabled 00:03:19.013 EAL: Heap on socket 0 was shrunk by 130MB 00:03:19.013 EAL: Trying to obtain current memory policy. 
00:03:19.013 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:19.013 EAL: Restoring previous memory policy: 4 00:03:19.013 EAL: Calling mem event callback 'spdk:(nil)' 00:03:19.013 EAL: request: mp_malloc_sync 00:03:19.013 EAL: No shared files mode enabled, IPC is disabled 00:03:19.013 EAL: Heap on socket 0 was expanded by 258MB 00:03:19.013 EAL: Calling mem event callback 'spdk:(nil)' 00:03:19.271 EAL: request: mp_malloc_sync 00:03:19.271 EAL: No shared files mode enabled, IPC is disabled 00:03:19.271 EAL: Heap on socket 0 was shrunk by 258MB 00:03:19.271 EAL: Trying to obtain current memory policy. 00:03:19.271 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:19.271 EAL: Restoring previous memory policy: 4 00:03:19.271 EAL: Calling mem event callback 'spdk:(nil)' 00:03:19.271 EAL: request: mp_malloc_sync 00:03:19.271 EAL: No shared files mode enabled, IPC is disabled 00:03:19.271 EAL: Heap on socket 0 was expanded by 514MB 00:03:19.530 EAL: Calling mem event callback 'spdk:(nil)' 00:03:19.530 EAL: request: mp_malloc_sync 00:03:19.530 EAL: No shared files mode enabled, IPC is disabled 00:03:19.530 EAL: Heap on socket 0 was shrunk by 514MB 00:03:19.530 EAL: Trying to obtain current memory policy. 
00:03:19.530 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:19.789 EAL: Restoring previous memory policy: 4 00:03:19.789 EAL: Calling mem event callback 'spdk:(nil)' 00:03:19.789 EAL: request: mp_malloc_sync 00:03:19.789 EAL: No shared files mode enabled, IPC is disabled 00:03:19.789 EAL: Heap on socket 0 was expanded by 1026MB 00:03:20.048 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.305 EAL: request: mp_malloc_sync 00:03:20.305 EAL: No shared files mode enabled, IPC is disabled 00:03:20.305 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:20.305 passed 00:03:20.305 00:03:20.305 Run Summary: Type Total Ran Passed Failed Inactive 00:03:20.305 suites 1 1 n/a 0 0 00:03:20.305 tests 2 2 2 0 0 00:03:20.305 asserts 497 497 497 0 n/a 00:03:20.305 00:03:20.305 Elapsed time = 1.380 seconds 00:03:20.305 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.305 EAL: request: mp_malloc_sync 00:03:20.305 EAL: No shared files mode enabled, IPC is disabled 00:03:20.305 EAL: Heap on socket 0 was shrunk by 2MB 00:03:20.305 EAL: No shared files mode enabled, IPC is disabled 00:03:20.305 EAL: No shared files mode enabled, IPC is disabled 00:03:20.305 EAL: No shared files mode enabled, IPC is disabled 00:03:20.305 00:03:20.305 real 0m1.501s 00:03:20.305 user 0m0.854s 00:03:20.305 sys 0m0.611s 00:03:20.305 22:27:03 env.env_vtophys -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:20.305 22:27:03 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:20.305 ************************************ 00:03:20.305 END TEST env_vtophys 00:03:20.305 ************************************ 00:03:20.305 22:27:03 env -- common/autotest_common.sh@1136 -- # return 0 00:03:20.305 22:27:03 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:20.305 22:27:03 env -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:20.305 22:27:03 env -- common/autotest_common.sh@1099 -- # xtrace_disable 
00:03:20.305 22:27:03 env -- common/autotest_common.sh@10 -- # set +x 00:03:20.305 ************************************ 00:03:20.305 START TEST env_pci 00:03:20.305 ************************************ 00:03:20.305 22:27:03 env.env_pci -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:20.305 00:03:20.305 00:03:20.305 CUnit - A unit testing framework for C - Version 2.1-3 00:03:20.305 http://cunit.sourceforge.net/ 00:03:20.305 00:03:20.305 00:03:20.305 Suite: pci 00:03:20.305 Test: pci_hook ...[2024-07-15 22:27:03.708932] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1129957 has claimed it 00:03:20.305 EAL: Cannot find device (10000:00:01.0) 00:03:20.305 EAL: Failed to attach device on primary process 00:03:20.305 passed 00:03:20.305 00:03:20.305 Run Summary: Type Total Ran Passed Failed Inactive 00:03:20.305 suites 1 1 n/a 0 0 00:03:20.305 tests 1 1 1 0 0 00:03:20.305 asserts 25 25 25 0 n/a 00:03:20.305 00:03:20.305 Elapsed time = 0.022 seconds 00:03:20.305 00:03:20.305 real 0m0.036s 00:03:20.305 user 0m0.013s 00:03:20.305 sys 0m0.022s 00:03:20.305 22:27:03 env.env_pci -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:20.305 22:27:03 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:20.305 ************************************ 00:03:20.305 END TEST env_pci 00:03:20.305 ************************************ 00:03:20.305 22:27:03 env -- common/autotest_common.sh@1136 -- # return 0 00:03:20.305 22:27:03 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:20.305 22:27:03 env -- env/env.sh@15 -- # uname 00:03:20.305 22:27:03 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:20.305 22:27:03 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:20.305 22:27:03 env -- env/env.sh@24 -- # run_test env_dpdk_post_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:20.305 22:27:03 env -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:03:20.305 22:27:03 env -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:20.305 22:27:03 env -- common/autotest_common.sh@10 -- # set +x 00:03:20.305 ************************************ 00:03:20.305 START TEST env_dpdk_post_init 00:03:20.305 ************************************ 00:03:20.305 22:27:03 env.env_dpdk_post_init -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:20.562 EAL: Detected CPU lcores: 48 00:03:20.562 EAL: Detected NUMA nodes: 2 00:03:20.562 EAL: Detected shared linkage of DPDK 00:03:20.562 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:20.562 EAL: Selected IOVA mode 'VA' 00:03:20.562 EAL: VFIO support initialized 00:03:20.562 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:20.562 EAL: Using IOMMU type 1 (Type 1) 00:03:20.562 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:20.562 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:20.562 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:20.562 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:20.562 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:20.562 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:20.562 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:20.562 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:20.562 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:20.562 EAL: Probe PCI driver: spdk_ioat (8086:0e21) 
device: 0000:80:04.1 (socket 1) 00:03:20.562 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:20.562 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:20.562 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:20.562 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:20.819 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:20.819 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:21.384 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:03:24.665 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:03:24.665 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:03:24.923 Starting DPDK initialization... 00:03:24.923 Starting SPDK post initialization... 00:03:24.923 SPDK NVMe probe 00:03:24.923 Attaching to 0000:88:00.0 00:03:24.923 Attached to 0000:88:00.0 00:03:24.923 Cleaning up... 
00:03:24.923 00:03:24.923 real 0m4.388s 00:03:24.923 user 0m3.270s 00:03:24.923 sys 0m0.177s 00:03:24.923 22:27:08 env.env_dpdk_post_init -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:24.923 22:27:08 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:24.923 ************************************ 00:03:24.923 END TEST env_dpdk_post_init 00:03:24.923 ************************************ 00:03:24.923 22:27:08 env -- common/autotest_common.sh@1136 -- # return 0 00:03:24.923 22:27:08 env -- env/env.sh@26 -- # uname 00:03:24.923 22:27:08 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:24.923 22:27:08 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:24.923 22:27:08 env -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:24.923 22:27:08 env -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:24.923 22:27:08 env -- common/autotest_common.sh@10 -- # set +x 00:03:24.923 ************************************ 00:03:24.923 START TEST env_mem_callbacks 00:03:24.923 ************************************ 00:03:24.923 22:27:08 env.env_mem_callbacks -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:24.923 EAL: Detected CPU lcores: 48 00:03:24.923 EAL: Detected NUMA nodes: 2 00:03:24.923 EAL: Detected shared linkage of DPDK 00:03:24.923 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:24.923 EAL: Selected IOVA mode 'VA' 00:03:24.923 EAL: VFIO support initialized 00:03:24.923 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:24.923 00:03:24.923 00:03:24.923 CUnit - A unit testing framework for C - Version 2.1-3 00:03:24.923 http://cunit.sourceforge.net/ 00:03:24.923 00:03:24.923 00:03:24.923 Suite: memory 00:03:24.923 Test: test ... 
00:03:24.924 register 0x200000200000 2097152 00:03:24.924 malloc 3145728 00:03:24.924 register 0x200000400000 4194304 00:03:24.924 buf 0x200000500000 len 3145728 PASSED 00:03:24.924 malloc 64 00:03:24.924 buf 0x2000004fff40 len 64 PASSED 00:03:24.924 malloc 4194304 00:03:24.924 register 0x200000800000 6291456 00:03:24.924 buf 0x200000a00000 len 4194304 PASSED 00:03:24.924 free 0x200000500000 3145728 00:03:24.924 free 0x2000004fff40 64 00:03:24.924 unregister 0x200000400000 4194304 PASSED 00:03:24.924 free 0x200000a00000 4194304 00:03:24.924 unregister 0x200000800000 6291456 PASSED 00:03:24.924 malloc 8388608 00:03:24.924 register 0x200000400000 10485760 00:03:24.924 buf 0x200000600000 len 8388608 PASSED 00:03:24.924 free 0x200000600000 8388608 00:03:24.924 unregister 0x200000400000 10485760 PASSED 00:03:24.924 passed 00:03:24.924 00:03:24.924 Run Summary: Type Total Ran Passed Failed Inactive 00:03:24.924 suites 1 1 n/a 0 0 00:03:24.924 tests 1 1 1 0 0 00:03:24.924 asserts 15 15 15 0 n/a 00:03:24.924 00:03:24.924 Elapsed time = 0.005 seconds 00:03:24.924 00:03:24.924 real 0m0.049s 00:03:24.924 user 0m0.015s 00:03:24.924 sys 0m0.034s 00:03:24.924 22:27:08 env.env_mem_callbacks -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:24.924 22:27:08 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:24.924 ************************************ 00:03:24.924 END TEST env_mem_callbacks 00:03:24.924 ************************************ 00:03:24.924 22:27:08 env -- common/autotest_common.sh@1136 -- # return 0 00:03:24.924 00:03:24.924 real 0m6.419s 00:03:24.924 user 0m4.431s 00:03:24.924 sys 0m1.026s 00:03:24.924 22:27:08 env -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:24.924 22:27:08 env -- common/autotest_common.sh@10 -- # set +x 00:03:24.924 ************************************ 00:03:24.924 END TEST env 00:03:24.924 ************************************ 00:03:24.924 22:27:08 -- common/autotest_common.sh@1136 -- # return 0 
00:03:24.924 22:27:08 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:24.924 22:27:08 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:24.924 22:27:08 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:24.924 22:27:08 -- common/autotest_common.sh@10 -- # set +x 00:03:24.924 ************************************ 00:03:24.924 START TEST rpc 00:03:24.924 ************************************ 00:03:24.924 22:27:08 rpc -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:24.924 * Looking for test storage... 00:03:24.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:24.924 22:27:08 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1131040 00:03:24.924 22:27:08 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:24.924 22:27:08 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:24.924 22:27:08 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1131040 00:03:24.924 22:27:08 rpc -- common/autotest_common.sh@823 -- # '[' -z 1131040 ']' 00:03:24.924 22:27:08 rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:24.924 22:27:08 rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:03:24.924 22:27:08 rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:24.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:24.924 22:27:08 rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:03:24.924 22:27:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:25.183 [2024-07-15 22:27:08.443962] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:03:25.183 [2024-07-15 22:27:08.444060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1131040 ] 00:03:25.183 [2024-07-15 22:27:08.503944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:25.183 [2024-07-15 22:27:08.613429] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:25.183 [2024-07-15 22:27:08.613503] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1131040' to capture a snapshot of events at runtime. 00:03:25.183 [2024-07-15 22:27:08.613529] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:25.184 [2024-07-15 22:27:08.613540] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:25.184 [2024-07-15 22:27:08.613558] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1131040 for offline analysis/debug. 
00:03:25.184 [2024-07-15 22:27:08.613584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:25.442 22:27:08 rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:03:25.442 22:27:08 rpc -- common/autotest_common.sh@856 -- # return 0 00:03:25.442 22:27:08 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:25.442 22:27:08 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:25.442 22:27:08 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:25.442 22:27:08 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:25.442 22:27:08 rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:25.442 22:27:08 rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:25.442 22:27:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:25.442 ************************************ 00:03:25.442 START TEST rpc_integrity 00:03:25.442 ************************************ 00:03:25.442 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@1117 -- # rpc_integrity 00:03:25.442 22:27:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:25.442 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:25.442 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:25.442 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:25.442 22:27:08 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:03:25.442 22:27:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:25.700 22:27:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:25.700 22:27:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:25.700 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:25.700 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:25.700 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:25.700 22:27:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:25.700 22:27:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:25.700 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:25.700 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:25.700 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:25.700 22:27:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:25.700 { 00:03:25.700 "name": "Malloc0", 00:03:25.700 "aliases": [ 00:03:25.700 "14c81ae6-e53a-4d4a-bf2d-812863dd6bf6" 00:03:25.700 ], 00:03:25.700 "product_name": "Malloc disk", 00:03:25.700 "block_size": 512, 00:03:25.700 "num_blocks": 16384, 00:03:25.700 "uuid": "14c81ae6-e53a-4d4a-bf2d-812863dd6bf6", 00:03:25.700 "assigned_rate_limits": { 00:03:25.700 "rw_ios_per_sec": 0, 00:03:25.700 "rw_mbytes_per_sec": 0, 00:03:25.700 "r_mbytes_per_sec": 0, 00:03:25.700 "w_mbytes_per_sec": 0 00:03:25.700 }, 00:03:25.700 "claimed": false, 00:03:25.700 "zoned": false, 00:03:25.700 "supported_io_types": { 00:03:25.700 "read": true, 00:03:25.700 "write": true, 00:03:25.700 "unmap": true, 00:03:25.700 "flush": true, 00:03:25.700 "reset": true, 00:03:25.700 "nvme_admin": false, 00:03:25.700 "nvme_io": false, 00:03:25.700 "nvme_io_md": false, 00:03:25.700 "write_zeroes": true, 00:03:25.700 "zcopy": true, 00:03:25.700 "get_zone_info": false, 00:03:25.700 
"zone_management": false, 00:03:25.700 "zone_append": false, 00:03:25.700 "compare": false, 00:03:25.700 "compare_and_write": false, 00:03:25.700 "abort": true, 00:03:25.700 "seek_hole": false, 00:03:25.700 "seek_data": false, 00:03:25.700 "copy": true, 00:03:25.700 "nvme_iov_md": false 00:03:25.700 }, 00:03:25.700 "memory_domains": [ 00:03:25.700 { 00:03:25.700 "dma_device_id": "system", 00:03:25.700 "dma_device_type": 1 00:03:25.700 }, 00:03:25.700 { 00:03:25.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:25.700 "dma_device_type": 2 00:03:25.700 } 00:03:25.700 ], 00:03:25.700 "driver_specific": {} 00:03:25.700 } 00:03:25.700 ]' 00:03:25.700 22:27:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:25.700 22:27:09 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:25.700 22:27:09 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:25.700 22:27:09 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:25.700 22:27:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:25.700 [2024-07-15 22:27:09.005591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:25.700 [2024-07-15 22:27:09.005636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:25.700 [2024-07-15 22:27:09.005660] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x952d50 00:03:25.700 [2024-07-15 22:27:09.005675] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:25.700 [2024-07-15 22:27:09.007201] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:25.700 [2024-07-15 22:27:09.007227] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:25.700 Passthru0 00:03:25.700 22:27:09 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:25.700 22:27:09 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:03:25.700 22:27:09 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:25.700 22:27:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:25.700 22:27:09 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:25.700 22:27:09 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:25.700 { 00:03:25.700 "name": "Malloc0", 00:03:25.700 "aliases": [ 00:03:25.700 "14c81ae6-e53a-4d4a-bf2d-812863dd6bf6" 00:03:25.700 ], 00:03:25.700 "product_name": "Malloc disk", 00:03:25.700 "block_size": 512, 00:03:25.700 "num_blocks": 16384, 00:03:25.700 "uuid": "14c81ae6-e53a-4d4a-bf2d-812863dd6bf6", 00:03:25.700 "assigned_rate_limits": { 00:03:25.700 "rw_ios_per_sec": 0, 00:03:25.700 "rw_mbytes_per_sec": 0, 00:03:25.700 "r_mbytes_per_sec": 0, 00:03:25.700 "w_mbytes_per_sec": 0 00:03:25.700 }, 00:03:25.700 "claimed": true, 00:03:25.700 "claim_type": "exclusive_write", 00:03:25.700 "zoned": false, 00:03:25.700 "supported_io_types": { 00:03:25.700 "read": true, 00:03:25.700 "write": true, 00:03:25.700 "unmap": true, 00:03:25.700 "flush": true, 00:03:25.700 "reset": true, 00:03:25.700 "nvme_admin": false, 00:03:25.700 "nvme_io": false, 00:03:25.700 "nvme_io_md": false, 00:03:25.700 "write_zeroes": true, 00:03:25.700 "zcopy": true, 00:03:25.700 "get_zone_info": false, 00:03:25.700 "zone_management": false, 00:03:25.700 "zone_append": false, 00:03:25.700 "compare": false, 00:03:25.700 "compare_and_write": false, 00:03:25.700 "abort": true, 00:03:25.700 "seek_hole": false, 00:03:25.700 "seek_data": false, 00:03:25.700 "copy": true, 00:03:25.700 "nvme_iov_md": false 00:03:25.700 }, 00:03:25.700 "memory_domains": [ 00:03:25.700 { 00:03:25.700 "dma_device_id": "system", 00:03:25.700 "dma_device_type": 1 00:03:25.700 }, 00:03:25.700 { 00:03:25.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:25.700 "dma_device_type": 2 00:03:25.700 } 00:03:25.700 ], 00:03:25.700 "driver_specific": {} 00:03:25.700 }, 00:03:25.700 { 
00:03:25.700 "name": "Passthru0", 00:03:25.700 "aliases": [ 00:03:25.700 "e7cb6e91-2e3d-51ad-a973-c76eb4f25367" 00:03:25.700 ], 00:03:25.700 "product_name": "passthru", 00:03:25.700 "block_size": 512, 00:03:25.700 "num_blocks": 16384, 00:03:25.700 "uuid": "e7cb6e91-2e3d-51ad-a973-c76eb4f25367", 00:03:25.700 "assigned_rate_limits": { 00:03:25.700 "rw_ios_per_sec": 0, 00:03:25.700 "rw_mbytes_per_sec": 0, 00:03:25.700 "r_mbytes_per_sec": 0, 00:03:25.700 "w_mbytes_per_sec": 0 00:03:25.700 }, 00:03:25.700 "claimed": false, 00:03:25.700 "zoned": false, 00:03:25.700 "supported_io_types": { 00:03:25.700 "read": true, 00:03:25.700 "write": true, 00:03:25.700 "unmap": true, 00:03:25.700 "flush": true, 00:03:25.700 "reset": true, 00:03:25.700 "nvme_admin": false, 00:03:25.700 "nvme_io": false, 00:03:25.700 "nvme_io_md": false, 00:03:25.700 "write_zeroes": true, 00:03:25.700 "zcopy": true, 00:03:25.700 "get_zone_info": false, 00:03:25.700 "zone_management": false, 00:03:25.700 "zone_append": false, 00:03:25.700 "compare": false, 00:03:25.700 "compare_and_write": false, 00:03:25.700 "abort": true, 00:03:25.700 "seek_hole": false, 00:03:25.700 "seek_data": false, 00:03:25.700 "copy": true, 00:03:25.700 "nvme_iov_md": false 00:03:25.700 }, 00:03:25.700 "memory_domains": [ 00:03:25.700 { 00:03:25.700 "dma_device_id": "system", 00:03:25.700 "dma_device_type": 1 00:03:25.700 }, 00:03:25.700 { 00:03:25.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:25.700 "dma_device_type": 2 00:03:25.700 } 00:03:25.700 ], 00:03:25.700 "driver_specific": { 00:03:25.700 "passthru": { 00:03:25.700 "name": "Passthru0", 00:03:25.700 "base_bdev_name": "Malloc0" 00:03:25.700 } 00:03:25.700 } 00:03:25.700 } 00:03:25.700 ]' 00:03:25.700 22:27:09 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:25.700 22:27:09 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:25.700 22:27:09 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:25.700 22:27:09 
rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:25.700 22:27:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:25.700 22:27:09 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:25.701 22:27:09 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:25.701 22:27:09 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:25.701 22:27:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:25.701 22:27:09 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:25.701 22:27:09 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:25.701 22:27:09 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:25.701 22:27:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:25.701 22:27:09 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:25.701 22:27:09 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:25.701 22:27:09 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:25.701 22:27:09 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:25.701 00:03:25.701 real 0m0.223s 00:03:25.701 user 0m0.149s 00:03:25.701 sys 0m0.022s 00:03:25.701 22:27:09 rpc.rpc_integrity -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:25.701 22:27:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:25.701 ************************************ 00:03:25.701 END TEST rpc_integrity 00:03:25.701 ************************************ 00:03:25.701 22:27:09 rpc -- common/autotest_common.sh@1136 -- # return 0 00:03:25.701 22:27:09 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:25.701 22:27:09 rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:25.701 22:27:09 rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:25.701 22:27:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:25.701 
************************************ 00:03:25.701 START TEST rpc_plugins 00:03:25.701 ************************************ 00:03:25.701 22:27:09 rpc.rpc_plugins -- common/autotest_common.sh@1117 -- # rpc_plugins 00:03:25.701 22:27:09 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:25.701 22:27:09 rpc.rpc_plugins -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:25.701 22:27:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:25.701 22:27:09 rpc.rpc_plugins -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:25.701 22:27:09 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:25.701 22:27:09 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:25.701 22:27:09 rpc.rpc_plugins -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:25.701 22:27:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:25.701 22:27:09 rpc.rpc_plugins -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:25.701 22:27:09 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:25.701 { 00:03:25.701 "name": "Malloc1", 00:03:25.701 "aliases": [ 00:03:25.701 "6e3cc16f-4559-41ef-96df-5eec672be696" 00:03:25.701 ], 00:03:25.701 "product_name": "Malloc disk", 00:03:25.701 "block_size": 4096, 00:03:25.701 "num_blocks": 256, 00:03:25.701 "uuid": "6e3cc16f-4559-41ef-96df-5eec672be696", 00:03:25.701 "assigned_rate_limits": { 00:03:25.701 "rw_ios_per_sec": 0, 00:03:25.701 "rw_mbytes_per_sec": 0, 00:03:25.701 "r_mbytes_per_sec": 0, 00:03:25.701 "w_mbytes_per_sec": 0 00:03:25.701 }, 00:03:25.701 "claimed": false, 00:03:25.701 "zoned": false, 00:03:25.701 "supported_io_types": { 00:03:25.701 "read": true, 00:03:25.701 "write": true, 00:03:25.701 "unmap": true, 00:03:25.701 "flush": true, 00:03:25.701 "reset": true, 00:03:25.701 "nvme_admin": false, 00:03:25.701 "nvme_io": false, 00:03:25.701 "nvme_io_md": false, 00:03:25.701 "write_zeroes": true, 00:03:25.701 "zcopy": true, 00:03:25.701 
"get_zone_info": false, 00:03:25.701 "zone_management": false, 00:03:25.701 "zone_append": false, 00:03:25.701 "compare": false, 00:03:25.701 "compare_and_write": false, 00:03:25.701 "abort": true, 00:03:25.701 "seek_hole": false, 00:03:25.701 "seek_data": false, 00:03:25.701 "copy": true, 00:03:25.701 "nvme_iov_md": false 00:03:25.701 }, 00:03:25.701 "memory_domains": [ 00:03:25.701 { 00:03:25.701 "dma_device_id": "system", 00:03:25.701 "dma_device_type": 1 00:03:25.701 }, 00:03:25.701 { 00:03:25.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:25.701 "dma_device_type": 2 00:03:25.701 } 00:03:25.701 ], 00:03:25.701 "driver_specific": {} 00:03:25.701 } 00:03:25.701 ]' 00:03:25.701 22:27:09 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:25.959 22:27:09 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:25.959 22:27:09 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:25.959 22:27:09 rpc.rpc_plugins -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:25.959 22:27:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:25.959 22:27:09 rpc.rpc_plugins -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:25.959 22:27:09 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:25.959 22:27:09 rpc.rpc_plugins -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:25.959 22:27:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:25.959 22:27:09 rpc.rpc_plugins -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:25.959 22:27:09 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:25.959 22:27:09 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:25.959 22:27:09 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:25.959 00:03:25.959 real 0m0.114s 00:03:25.959 user 0m0.076s 00:03:25.959 sys 0m0.010s 00:03:25.959 22:27:09 rpc.rpc_plugins -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:25.959 22:27:09 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:03:25.959 ************************************ 00:03:25.959 END TEST rpc_plugins 00:03:25.959 ************************************ 00:03:25.959 22:27:09 rpc -- common/autotest_common.sh@1136 -- # return 0 00:03:25.959 22:27:09 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:25.959 22:27:09 rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:25.959 22:27:09 rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:25.959 22:27:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:25.959 ************************************ 00:03:25.959 START TEST rpc_trace_cmd_test 00:03:25.959 ************************************ 00:03:25.959 22:27:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1117 -- # rpc_trace_cmd_test 00:03:25.959 22:27:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:25.959 22:27:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:25.959 22:27:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:25.959 22:27:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:25.959 22:27:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:25.959 22:27:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:25.959 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1131040", 00:03:25.959 "tpoint_group_mask": "0x8", 00:03:25.959 "iscsi_conn": { 00:03:25.959 "mask": "0x2", 00:03:25.959 "tpoint_mask": "0x0" 00:03:25.959 }, 00:03:25.959 "scsi": { 00:03:25.959 "mask": "0x4", 00:03:25.959 "tpoint_mask": "0x0" 00:03:25.959 }, 00:03:25.959 "bdev": { 00:03:25.959 "mask": "0x8", 00:03:25.959 "tpoint_mask": "0xffffffffffffffff" 00:03:25.959 }, 00:03:25.959 "nvmf_rdma": { 00:03:25.959 "mask": "0x10", 00:03:25.959 "tpoint_mask": "0x0" 00:03:25.959 }, 00:03:25.959 "nvmf_tcp": { 00:03:25.959 "mask": "0x20", 00:03:25.959 "tpoint_mask": "0x0" 00:03:25.959 }, 
00:03:25.959 "ftl": { 00:03:25.959 "mask": "0x40", 00:03:25.959 "tpoint_mask": "0x0" 00:03:25.959 }, 00:03:25.959 "blobfs": { 00:03:25.959 "mask": "0x80", 00:03:25.959 "tpoint_mask": "0x0" 00:03:25.959 }, 00:03:25.959 "dsa": { 00:03:25.959 "mask": "0x200", 00:03:25.959 "tpoint_mask": "0x0" 00:03:25.959 }, 00:03:25.959 "thread": { 00:03:25.959 "mask": "0x400", 00:03:25.959 "tpoint_mask": "0x0" 00:03:25.959 }, 00:03:25.959 "nvme_pcie": { 00:03:25.959 "mask": "0x800", 00:03:25.959 "tpoint_mask": "0x0" 00:03:25.959 }, 00:03:25.959 "iaa": { 00:03:25.959 "mask": "0x1000", 00:03:25.959 "tpoint_mask": "0x0" 00:03:25.959 }, 00:03:25.959 "nvme_tcp": { 00:03:25.959 "mask": "0x2000", 00:03:25.959 "tpoint_mask": "0x0" 00:03:25.959 }, 00:03:25.959 "bdev_nvme": { 00:03:25.959 "mask": "0x4000", 00:03:25.959 "tpoint_mask": "0x0" 00:03:25.959 }, 00:03:25.959 "sock": { 00:03:25.959 "mask": "0x8000", 00:03:25.959 "tpoint_mask": "0x0" 00:03:25.959 } 00:03:25.959 }' 00:03:25.959 22:27:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:25.959 22:27:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:25.959 22:27:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:25.959 22:27:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:25.959 22:27:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:25.959 22:27:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:25.959 22:27:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:26.219 22:27:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:26.219 22:27:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:26.219 22:27:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:26.219 00:03:26.219 real 0m0.192s 00:03:26.219 user 0m0.173s 00:03:26.219 sys 0m0.013s 00:03:26.219 22:27:09 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1118 -- # xtrace_disable 00:03:26.219 22:27:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:26.219 ************************************ 00:03:26.219 END TEST rpc_trace_cmd_test 00:03:26.219 ************************************ 00:03:26.219 22:27:09 rpc -- common/autotest_common.sh@1136 -- # return 0 00:03:26.219 22:27:09 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:26.219 22:27:09 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:26.219 22:27:09 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:26.219 22:27:09 rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:26.219 22:27:09 rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:26.220 22:27:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:26.220 ************************************ 00:03:26.220 START TEST rpc_daemon_integrity 00:03:26.220 ************************************ 00:03:26.220 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1117 -- # rpc_integrity 00:03:26.220 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:26.220 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:26.220 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:26.220 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:26.220 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:26.220 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:26.220 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:26.220 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:26.220 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:26.220 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:26.220 22:27:09 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:26.220 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:26.220 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:26.220 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:26.220 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:26.220 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:26.220 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:26.220 { 00:03:26.220 "name": "Malloc2", 00:03:26.220 "aliases": [ 00:03:26.220 "810cdd27-b3ee-4e77-931a-664efdb5f073" 00:03:26.220 ], 00:03:26.220 "product_name": "Malloc disk", 00:03:26.220 "block_size": 512, 00:03:26.220 "num_blocks": 16384, 00:03:26.220 "uuid": "810cdd27-b3ee-4e77-931a-664efdb5f073", 00:03:26.220 "assigned_rate_limits": { 00:03:26.220 "rw_ios_per_sec": 0, 00:03:26.220 "rw_mbytes_per_sec": 0, 00:03:26.220 "r_mbytes_per_sec": 0, 00:03:26.220 "w_mbytes_per_sec": 0 00:03:26.220 }, 00:03:26.220 "claimed": false, 00:03:26.220 "zoned": false, 00:03:26.220 "supported_io_types": { 00:03:26.220 "read": true, 00:03:26.220 "write": true, 00:03:26.220 "unmap": true, 00:03:26.220 "flush": true, 00:03:26.220 "reset": true, 00:03:26.220 "nvme_admin": false, 00:03:26.220 "nvme_io": false, 00:03:26.220 "nvme_io_md": false, 00:03:26.220 "write_zeroes": true, 00:03:26.220 "zcopy": true, 00:03:26.220 "get_zone_info": false, 00:03:26.220 "zone_management": false, 00:03:26.220 "zone_append": false, 00:03:26.220 "compare": false, 00:03:26.220 "compare_and_write": false, 00:03:26.220 "abort": true, 00:03:26.220 "seek_hole": false, 00:03:26.220 "seek_data": false, 00:03:26.220 "copy": true, 00:03:26.220 "nvme_iov_md": false 00:03:26.220 }, 00:03:26.220 "memory_domains": [ 00:03:26.220 { 00:03:26.220 "dma_device_id": "system", 00:03:26.220 "dma_device_type": 
1 00:03:26.220 }, 00:03:26.220 { 00:03:26.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:26.220 "dma_device_type": 2 00:03:26.220 } 00:03:26.220 ], 00:03:26.220 "driver_specific": {} 00:03:26.220 } 00:03:26.220 ]' 00:03:26.220 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:26.220 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:26.220 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:26.220 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:26.220 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:26.220 [2024-07-15 22:27:09.664206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:26.220 [2024-07-15 22:27:09.664264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:26.220 [2024-07-15 22:27:09.664293] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x952980 00:03:26.220 [2024-07-15 22:27:09.664310] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:26.220 [2024-07-15 22:27:09.665637] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:26.220 [2024-07-15 22:27:09.665665] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:26.220 Passthru0 00:03:26.220 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:26.220 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:26.220 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:26.220 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:26.220 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:26.220 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 
00:03:26.220 { 00:03:26.220 "name": "Malloc2", 00:03:26.220 "aliases": [ 00:03:26.220 "810cdd27-b3ee-4e77-931a-664efdb5f073" 00:03:26.220 ], 00:03:26.220 "product_name": "Malloc disk", 00:03:26.220 "block_size": 512, 00:03:26.220 "num_blocks": 16384, 00:03:26.220 "uuid": "810cdd27-b3ee-4e77-931a-664efdb5f073", 00:03:26.220 "assigned_rate_limits": { 00:03:26.220 "rw_ios_per_sec": 0, 00:03:26.220 "rw_mbytes_per_sec": 0, 00:03:26.220 "r_mbytes_per_sec": 0, 00:03:26.220 "w_mbytes_per_sec": 0 00:03:26.220 }, 00:03:26.220 "claimed": true, 00:03:26.220 "claim_type": "exclusive_write", 00:03:26.220 "zoned": false, 00:03:26.220 "supported_io_types": { 00:03:26.220 "read": true, 00:03:26.220 "write": true, 00:03:26.220 "unmap": true, 00:03:26.220 "flush": true, 00:03:26.220 "reset": true, 00:03:26.220 "nvme_admin": false, 00:03:26.220 "nvme_io": false, 00:03:26.220 "nvme_io_md": false, 00:03:26.220 "write_zeroes": true, 00:03:26.220 "zcopy": true, 00:03:26.220 "get_zone_info": false, 00:03:26.220 "zone_management": false, 00:03:26.220 "zone_append": false, 00:03:26.220 "compare": false, 00:03:26.220 "compare_and_write": false, 00:03:26.220 "abort": true, 00:03:26.220 "seek_hole": false, 00:03:26.220 "seek_data": false, 00:03:26.220 "copy": true, 00:03:26.220 "nvme_iov_md": false 00:03:26.220 }, 00:03:26.220 "memory_domains": [ 00:03:26.220 { 00:03:26.220 "dma_device_id": "system", 00:03:26.220 "dma_device_type": 1 00:03:26.220 }, 00:03:26.220 { 00:03:26.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:26.220 "dma_device_type": 2 00:03:26.220 } 00:03:26.220 ], 00:03:26.220 "driver_specific": {} 00:03:26.220 }, 00:03:26.220 { 00:03:26.220 "name": "Passthru0", 00:03:26.220 "aliases": [ 00:03:26.220 "0d2a150f-92b0-589b-9755-a1d38075aa97" 00:03:26.220 ], 00:03:26.220 "product_name": "passthru", 00:03:26.220 "block_size": 512, 00:03:26.220 "num_blocks": 16384, 00:03:26.220 "uuid": "0d2a150f-92b0-589b-9755-a1d38075aa97", 00:03:26.220 "assigned_rate_limits": { 00:03:26.220 
"rw_ios_per_sec": 0, 00:03:26.220 "rw_mbytes_per_sec": 0, 00:03:26.220 "r_mbytes_per_sec": 0, 00:03:26.220 "w_mbytes_per_sec": 0 00:03:26.220 }, 00:03:26.220 "claimed": false, 00:03:26.220 "zoned": false, 00:03:26.220 "supported_io_types": { 00:03:26.220 "read": true, 00:03:26.220 "write": true, 00:03:26.220 "unmap": true, 00:03:26.220 "flush": true, 00:03:26.220 "reset": true, 00:03:26.220 "nvme_admin": false, 00:03:26.220 "nvme_io": false, 00:03:26.220 "nvme_io_md": false, 00:03:26.220 "write_zeroes": true, 00:03:26.220 "zcopy": true, 00:03:26.220 "get_zone_info": false, 00:03:26.220 "zone_management": false, 00:03:26.220 "zone_append": false, 00:03:26.221 "compare": false, 00:03:26.221 "compare_and_write": false, 00:03:26.221 "abort": true, 00:03:26.221 "seek_hole": false, 00:03:26.221 "seek_data": false, 00:03:26.221 "copy": true, 00:03:26.221 "nvme_iov_md": false 00:03:26.221 }, 00:03:26.221 "memory_domains": [ 00:03:26.221 { 00:03:26.221 "dma_device_id": "system", 00:03:26.221 "dma_device_type": 1 00:03:26.221 }, 00:03:26.221 { 00:03:26.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:26.221 "dma_device_type": 2 00:03:26.221 } 00:03:26.221 ], 00:03:26.221 "driver_specific": { 00:03:26.221 "passthru": { 00:03:26.221 "name": "Passthru0", 00:03:26.221 "base_bdev_name": "Malloc2" 00:03:26.221 } 00:03:26.221 } 00:03:26.221 } 00:03:26.221 ]' 00:03:26.221 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:26.479 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:26.479 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:26.479 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:26.479 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:26.479 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:26.479 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc2 00:03:26.479 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:26.479 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:26.479 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:26.479 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:26.479 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:26.479 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:26.479 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:26.479 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:26.479 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:26.479 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:26.479 00:03:26.479 real 0m0.227s 00:03:26.479 user 0m0.156s 00:03:26.479 sys 0m0.017s 00:03:26.479 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:26.479 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:26.479 ************************************ 00:03:26.479 END TEST rpc_daemon_integrity 00:03:26.479 ************************************ 00:03:26.479 22:27:09 rpc -- common/autotest_common.sh@1136 -- # return 0 00:03:26.479 22:27:09 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:26.479 22:27:09 rpc -- rpc/rpc.sh@84 -- # killprocess 1131040 00:03:26.479 22:27:09 rpc -- common/autotest_common.sh@942 -- # '[' -z 1131040 ']' 00:03:26.479 22:27:09 rpc -- common/autotest_common.sh@946 -- # kill -0 1131040 00:03:26.479 22:27:09 rpc -- common/autotest_common.sh@947 -- # uname 00:03:26.479 22:27:09 rpc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:03:26.479 22:27:09 rpc -- common/autotest_common.sh@948 -- # ps --no-headers -o 
comm= 1131040 00:03:26.479 22:27:09 rpc -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:03:26.479 22:27:09 rpc -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:03:26.479 22:27:09 rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1131040' 00:03:26.479 killing process with pid 1131040 00:03:26.479 22:27:09 rpc -- common/autotest_common.sh@961 -- # kill 1131040 00:03:26.479 22:27:09 rpc -- common/autotest_common.sh@966 -- # wait 1131040 00:03:27.087 00:03:27.087 real 0m1.956s 00:03:27.087 user 0m2.449s 00:03:27.087 sys 0m0.568s 00:03:27.087 22:27:10 rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:27.087 22:27:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:27.087 ************************************ 00:03:27.087 END TEST rpc 00:03:27.087 ************************************ 00:03:27.087 22:27:10 -- common/autotest_common.sh@1136 -- # return 0 00:03:27.087 22:27:10 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:27.087 22:27:10 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:27.087 22:27:10 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:27.087 22:27:10 -- common/autotest_common.sh@10 -- # set +x 00:03:27.087 ************************************ 00:03:27.087 START TEST skip_rpc 00:03:27.087 ************************************ 00:03:27.087 22:27:10 skip_rpc -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:27.087 * Looking for test storage... 
00:03:27.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:27.087 22:27:10 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:27.087 22:27:10 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:27.087 22:27:10 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:27.087 22:27:10 skip_rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:27.087 22:27:10 skip_rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:27.087 22:27:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:27.087 ************************************ 00:03:27.087 START TEST skip_rpc 00:03:27.087 ************************************ 00:03:27.087 22:27:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1117 -- # test_skip_rpc 00:03:27.087 22:27:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1131556 00:03:27.087 22:27:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:27.087 22:27:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:27.087 22:27:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:27.087 [2024-07-15 22:27:10.473992] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
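The `skip_rpc` test above starts `spdk_tgt` with `--no-rpc-server` and then expects an ordinary `rpc_cmd spdk_get_version` call to fail, since nothing is listening on the RPC socket. That call is a plain JSON-RPC 2.0 request; a minimal sketch of building one is shown below (the `build_rpc_request` helper is illustrative, not SPDK's own `rpc.py`; only the method name comes from the log):

```python
import json

# Illustrative helper: build the JSON-RPC 2.0 request body that a client
# would write to the SPDK application socket (/var/tmp/spdk.sock by default).
def build_rpc_request(method, params=None, req_id=1):
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params is not None:
        req["params"] = params
    return json.dumps(req)

# The skip_rpc test expects this request to go unanswered, because the
# target was launched with --no-rpc-server.
payload = build_rpc_request("spdk_get_version")
```

With a normally started target, sending `payload` over the Unix socket would return the version object; here the test asserts the failure path instead.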
00:03:27.087 [2024-07-15 22:27:10.474068] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1131556 ] 00:03:27.087 [2024-07-15 22:27:10.533916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:27.346 [2024-07-15 22:27:10.655749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:32.615 22:27:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:32.615 22:27:15 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # local es=0 00:03:32.615 22:27:15 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:32.615 22:27:15 skip_rpc.skip_rpc -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:03:32.615 22:27:15 skip_rpc.skip_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:03:32.615 22:27:15 skip_rpc.skip_rpc -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:03:32.615 22:27:15 skip_rpc.skip_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:03:32.615 22:27:15 skip_rpc.skip_rpc -- common/autotest_common.sh@645 -- # rpc_cmd spdk_get_version 00:03:32.615 22:27:15 skip_rpc.skip_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:32.615 22:27:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:32.615 22:27:15 skip_rpc.skip_rpc -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:03:32.615 22:27:15 skip_rpc.skip_rpc -- common/autotest_common.sh@645 -- # es=1 00:03:32.615 22:27:15 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:03:32.615 22:27:15 skip_rpc.skip_rpc -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:03:32.615 22:27:15 skip_rpc.skip_rpc -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:03:32.615 22:27:15 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:32.615 22:27:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1131556 00:03:32.615 22:27:15 skip_rpc.skip_rpc -- common/autotest_common.sh@942 -- # '[' -z 1131556 ']' 00:03:32.615 22:27:15 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # kill -0 1131556 00:03:32.615 22:27:15 skip_rpc.skip_rpc -- common/autotest_common.sh@947 -- # uname 00:03:32.615 22:27:15 skip_rpc.skip_rpc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:03:32.615 22:27:15 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1131556 00:03:32.615 22:27:15 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:03:32.615 22:27:15 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:03:32.615 22:27:15 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1131556' 00:03:32.615 killing process with pid 1131556 00:03:32.615 22:27:15 skip_rpc.skip_rpc -- common/autotest_common.sh@961 -- # kill 1131556 00:03:32.615 22:27:15 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # wait 1131556 00:03:32.615 00:03:32.615 real 0m5.492s 00:03:32.615 user 0m5.175s 00:03:32.615 sys 0m0.316s 00:03:32.615 22:27:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:32.615 22:27:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:32.615 ************************************ 00:03:32.615 END TEST skip_rpc 00:03:32.615 ************************************ 00:03:32.615 22:27:15 skip_rpc -- common/autotest_common.sh@1136 -- # return 0 00:03:32.615 22:27:15 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:32.615 22:27:15 skip_rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:32.615 22:27:15 skip_rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:32.615 22:27:15 skip_rpc -- common/autotest_common.sh@10 
-- # set +x 00:03:32.615 ************************************ 00:03:32.615 START TEST skip_rpc_with_json 00:03:32.615 ************************************ 00:03:32.615 22:27:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1117 -- # test_skip_rpc_with_json 00:03:32.615 22:27:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:32.615 22:27:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1132252 00:03:32.615 22:27:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:32.615 22:27:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:32.615 22:27:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1132252 00:03:32.615 22:27:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@823 -- # '[' -z 1132252 ']' 00:03:32.615 22:27:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:32.615 22:27:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@828 -- # local max_retries=100 00:03:32.615 22:27:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:32.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:32.615 22:27:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # xtrace_disable 00:03:32.615 22:27:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:32.615 [2024-07-15 22:27:16.015501] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:03:32.615 [2024-07-15 22:27:16.015605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1132252 ] 00:03:32.615 [2024-07-15 22:27:16.073213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:32.874 [2024-07-15 22:27:16.183574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:33.133 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:03:33.133 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # return 0 00:03:33.133 22:27:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:33.133 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:33.133 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:33.133 [2024-07-15 22:27:16.444256] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:33.133 request: 00:03:33.133 { 00:03:33.133 "trtype": "tcp", 00:03:33.133 "method": "nvmf_get_transports", 00:03:33.133 "req_id": 1 00:03:33.133 } 00:03:33.133 Got JSON-RPC error response 00:03:33.133 response: 00:03:33.133 { 00:03:33.133 "code": -19, 00:03:33.133 "message": "No such device" 00:03:33.133 } 00:03:33.133 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:03:33.133 22:27:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:33.133 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:33.133 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:33.133 [2024-07-15 22:27:16.452393] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:33.133 22:27:16 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:33.133 22:27:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:33.133 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:33.133 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:33.133 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:33.133 22:27:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:33.133 { 00:03:33.133 "subsystems": [ 00:03:33.133 { 00:03:33.133 "subsystem": "vfio_user_target", 00:03:33.133 "config": null 00:03:33.133 }, 00:03:33.133 { 00:03:33.133 "subsystem": "keyring", 00:03:33.133 "config": [] 00:03:33.133 }, 00:03:33.133 { 00:03:33.133 "subsystem": "iobuf", 00:03:33.133 "config": [ 00:03:33.133 { 00:03:33.133 "method": "iobuf_set_options", 00:03:33.133 "params": { 00:03:33.133 "small_pool_count": 8192, 00:03:33.133 "large_pool_count": 1024, 00:03:33.133 "small_bufsize": 8192, 00:03:33.133 "large_bufsize": 135168 00:03:33.133 } 00:03:33.133 } 00:03:33.133 ] 00:03:33.133 }, 00:03:33.133 { 00:03:33.133 "subsystem": "sock", 00:03:33.133 "config": [ 00:03:33.133 { 00:03:33.133 "method": "sock_set_default_impl", 00:03:33.133 "params": { 00:03:33.133 "impl_name": "posix" 00:03:33.133 } 00:03:33.133 }, 00:03:33.133 { 00:03:33.133 "method": "sock_impl_set_options", 00:03:33.133 "params": { 00:03:33.133 "impl_name": "ssl", 00:03:33.133 "recv_buf_size": 4096, 00:03:33.133 "send_buf_size": 4096, 00:03:33.133 "enable_recv_pipe": true, 00:03:33.133 "enable_quickack": false, 00:03:33.133 "enable_placement_id": 0, 00:03:33.133 "enable_zerocopy_send_server": true, 00:03:33.133 "enable_zerocopy_send_client": false, 00:03:33.133 "zerocopy_threshold": 0, 00:03:33.133 "tls_version": 0, 00:03:33.133 "enable_ktls": false 00:03:33.133 } 
00:03:33.133 }, 00:03:33.133 { 00:03:33.133 "method": "sock_impl_set_options", 00:03:33.133 "params": { 00:03:33.133 "impl_name": "posix", 00:03:33.133 "recv_buf_size": 2097152, 00:03:33.133 "send_buf_size": 2097152, 00:03:33.133 "enable_recv_pipe": true, 00:03:33.133 "enable_quickack": false, 00:03:33.133 "enable_placement_id": 0, 00:03:33.133 "enable_zerocopy_send_server": true, 00:03:33.133 "enable_zerocopy_send_client": false, 00:03:33.133 "zerocopy_threshold": 0, 00:03:33.133 "tls_version": 0, 00:03:33.133 "enable_ktls": false 00:03:33.133 } 00:03:33.133 } 00:03:33.133 ] 00:03:33.133 }, 00:03:33.133 { 00:03:33.133 "subsystem": "vmd", 00:03:33.133 "config": [] 00:03:33.133 }, 00:03:33.133 { 00:03:33.133 "subsystem": "accel", 00:03:33.133 "config": [ 00:03:33.133 { 00:03:33.133 "method": "accel_set_options", 00:03:33.133 "params": { 00:03:33.133 "small_cache_size": 128, 00:03:33.133 "large_cache_size": 16, 00:03:33.133 "task_count": 2048, 00:03:33.133 "sequence_count": 2048, 00:03:33.133 "buf_count": 2048 00:03:33.133 } 00:03:33.133 } 00:03:33.133 ] 00:03:33.133 }, 00:03:33.133 { 00:03:33.133 "subsystem": "bdev", 00:03:33.133 "config": [ 00:03:33.133 { 00:03:33.133 "method": "bdev_set_options", 00:03:33.133 "params": { 00:03:33.133 "bdev_io_pool_size": 65535, 00:03:33.133 "bdev_io_cache_size": 256, 00:03:33.133 "bdev_auto_examine": true, 00:03:33.133 "iobuf_small_cache_size": 128, 00:03:33.133 "iobuf_large_cache_size": 16 00:03:33.133 } 00:03:33.133 }, 00:03:33.133 { 00:03:33.133 "method": "bdev_raid_set_options", 00:03:33.133 "params": { 00:03:33.133 "process_window_size_kb": 1024 00:03:33.133 } 00:03:33.133 }, 00:03:33.133 { 00:03:33.133 "method": "bdev_iscsi_set_options", 00:03:33.133 "params": { 00:03:33.133 "timeout_sec": 30 00:03:33.133 } 00:03:33.133 }, 00:03:33.133 { 00:03:33.133 "method": "bdev_nvme_set_options", 00:03:33.133 "params": { 00:03:33.133 "action_on_timeout": "none", 00:03:33.133 "timeout_us": 0, 00:03:33.133 "timeout_admin_us": 0, 
00:03:33.133 "keep_alive_timeout_ms": 10000, 00:03:33.133 "arbitration_burst": 0, 00:03:33.133 "low_priority_weight": 0, 00:03:33.133 "medium_priority_weight": 0, 00:03:33.133 "high_priority_weight": 0, 00:03:33.133 "nvme_adminq_poll_period_us": 10000, 00:03:33.133 "nvme_ioq_poll_period_us": 0, 00:03:33.133 "io_queue_requests": 0, 00:03:33.133 "delay_cmd_submit": true, 00:03:33.133 "transport_retry_count": 4, 00:03:33.133 "bdev_retry_count": 3, 00:03:33.133 "transport_ack_timeout": 0, 00:03:33.133 "ctrlr_loss_timeout_sec": 0, 00:03:33.133 "reconnect_delay_sec": 0, 00:03:33.133 "fast_io_fail_timeout_sec": 0, 00:03:33.133 "disable_auto_failback": false, 00:03:33.133 "generate_uuids": false, 00:03:33.133 "transport_tos": 0, 00:03:33.133 "nvme_error_stat": false, 00:03:33.133 "rdma_srq_size": 0, 00:03:33.133 "io_path_stat": false, 00:03:33.133 "allow_accel_sequence": false, 00:03:33.133 "rdma_max_cq_size": 0, 00:03:33.133 "rdma_cm_event_timeout_ms": 0, 00:03:33.133 "dhchap_digests": [ 00:03:33.133 "sha256", 00:03:33.133 "sha384", 00:03:33.133 "sha512" 00:03:33.133 ], 00:03:33.133 "dhchap_dhgroups": [ 00:03:33.133 "null", 00:03:33.133 "ffdhe2048", 00:03:33.133 "ffdhe3072", 00:03:33.133 "ffdhe4096", 00:03:33.133 "ffdhe6144", 00:03:33.133 "ffdhe8192" 00:03:33.133 ] 00:03:33.133 } 00:03:33.133 }, 00:03:33.133 { 00:03:33.133 "method": "bdev_nvme_set_hotplug", 00:03:33.133 "params": { 00:03:33.133 "period_us": 100000, 00:03:33.133 "enable": false 00:03:33.133 } 00:03:33.133 }, 00:03:33.133 { 00:03:33.133 "method": "bdev_wait_for_examine" 00:03:33.133 } 00:03:33.133 ] 00:03:33.133 }, 00:03:33.133 { 00:03:33.133 "subsystem": "scsi", 00:03:33.133 "config": null 00:03:33.133 }, 00:03:33.133 { 00:03:33.133 "subsystem": "scheduler", 00:03:33.133 "config": [ 00:03:33.133 { 00:03:33.133 "method": "framework_set_scheduler", 00:03:33.133 "params": { 00:03:33.133 "name": "static" 00:03:33.133 } 00:03:33.133 } 00:03:33.133 ] 00:03:33.133 }, 00:03:33.133 { 00:03:33.133 "subsystem": 
"vhost_scsi", 00:03:33.133 "config": [] 00:03:33.133 }, 00:03:33.133 { 00:03:33.133 "subsystem": "vhost_blk", 00:03:33.133 "config": [] 00:03:33.133 }, 00:03:33.133 { 00:03:33.133 "subsystem": "ublk", 00:03:33.133 "config": [] 00:03:33.133 }, 00:03:33.133 { 00:03:33.133 "subsystem": "nbd", 00:03:33.133 "config": [] 00:03:33.133 }, 00:03:33.133 { 00:03:33.133 "subsystem": "nvmf", 00:03:33.133 "config": [ 00:03:33.133 { 00:03:33.133 "method": "nvmf_set_config", 00:03:33.133 "params": { 00:03:33.133 "discovery_filter": "match_any", 00:03:33.133 "admin_cmd_passthru": { 00:03:33.133 "identify_ctrlr": false 00:03:33.133 } 00:03:33.133 } 00:03:33.133 }, 00:03:33.133 { 00:03:33.133 "method": "nvmf_set_max_subsystems", 00:03:33.133 "params": { 00:03:33.133 "max_subsystems": 1024 00:03:33.133 } 00:03:33.133 }, 00:03:33.133 { 00:03:33.133 "method": "nvmf_set_crdt", 00:03:33.133 "params": { 00:03:33.133 "crdt1": 0, 00:03:33.133 "crdt2": 0, 00:03:33.133 "crdt3": 0 00:03:33.133 } 00:03:33.133 }, 00:03:33.133 { 00:03:33.133 "method": "nvmf_create_transport", 00:03:33.133 "params": { 00:03:33.133 "trtype": "TCP", 00:03:33.133 "max_queue_depth": 128, 00:03:33.133 "max_io_qpairs_per_ctrlr": 127, 00:03:33.134 "in_capsule_data_size": 4096, 00:03:33.134 "max_io_size": 131072, 00:03:33.134 "io_unit_size": 131072, 00:03:33.134 "max_aq_depth": 128, 00:03:33.134 "num_shared_buffers": 511, 00:03:33.134 "buf_cache_size": 4294967295, 00:03:33.134 "dif_insert_or_strip": false, 00:03:33.134 "zcopy": false, 00:03:33.134 "c2h_success": true, 00:03:33.134 "sock_priority": 0, 00:03:33.134 "abort_timeout_sec": 1, 00:03:33.134 "ack_timeout": 0, 00:03:33.134 "data_wr_pool_size": 0 00:03:33.134 } 00:03:33.134 } 00:03:33.134 ] 00:03:33.134 }, 00:03:33.134 { 00:03:33.134 "subsystem": "iscsi", 00:03:33.134 "config": [ 00:03:33.134 { 00:03:33.134 "method": "iscsi_set_options", 00:03:33.134 "params": { 00:03:33.134 "node_base": "iqn.2016-06.io.spdk", 00:03:33.134 "max_sessions": 128, 00:03:33.134 
"max_connections_per_session": 2, 00:03:33.134 "max_queue_depth": 64, 00:03:33.134 "default_time2wait": 2, 00:03:33.134 "default_time2retain": 20, 00:03:33.134 "first_burst_length": 8192, 00:03:33.134 "immediate_data": true, 00:03:33.134 "allow_duplicated_isid": false, 00:03:33.134 "error_recovery_level": 0, 00:03:33.134 "nop_timeout": 60, 00:03:33.134 "nop_in_interval": 30, 00:03:33.134 "disable_chap": false, 00:03:33.134 "require_chap": false, 00:03:33.134 "mutual_chap": false, 00:03:33.134 "chap_group": 0, 00:03:33.134 "max_large_datain_per_connection": 64, 00:03:33.134 "max_r2t_per_connection": 4, 00:03:33.134 "pdu_pool_size": 36864, 00:03:33.134 "immediate_data_pool_size": 16384, 00:03:33.134 "data_out_pool_size": 2048 00:03:33.134 } 00:03:33.134 } 00:03:33.134 ] 00:03:33.134 } 00:03:33.134 ] 00:03:33.134 } 00:03:33.134 22:27:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:33.134 22:27:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1132252 00:03:33.134 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@942 -- # '[' -z 1132252 ']' 00:03:33.134 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # kill -0 1132252 00:03:33.134 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # uname 00:03:33.134 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:03:33.134 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1132252 00:03:33.394 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:03:33.394 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:03:33.394 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1132252' 00:03:33.394 killing process with pid 1132252 00:03:33.394 22:27:16 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@961 -- # kill 1132252 00:03:33.394 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # wait 1132252 00:03:33.654 22:27:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1132394 00:03:33.654 22:27:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:33.654 22:27:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:38.929 22:27:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1132394 00:03:38.929 22:27:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@942 -- # '[' -z 1132394 ']' 00:03:38.929 22:27:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # kill -0 1132394 00:03:38.929 22:27:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # uname 00:03:38.929 22:27:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:03:38.929 22:27:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1132394 00:03:38.929 22:27:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:03:38.929 22:27:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:03:38.929 22:27:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1132394' 00:03:38.929 killing process with pid 1132394 00:03:38.929 22:27:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@961 -- # kill 1132394 00:03:38.929 22:27:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # wait 1132394 00:03:39.188 22:27:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:39.188 22:27:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:39.188 00:03:39.188 real 0m6.620s 00:03:39.188 user 0m6.216s 00:03:39.188 sys 0m0.689s 00:03:39.188 22:27:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:39.188 22:27:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:39.188 ************************************ 00:03:39.188 END TEST skip_rpc_with_json 00:03:39.188 ************************************ 00:03:39.188 22:27:22 skip_rpc -- common/autotest_common.sh@1136 -- # return 0 00:03:39.188 22:27:22 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:39.188 22:27:22 skip_rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:39.188 22:27:22 skip_rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:39.188 22:27:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:39.188 ************************************ 00:03:39.188 START TEST skip_rpc_with_delay 00:03:39.188 ************************************ 00:03:39.188 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1117 -- # test_skip_rpc_with_delay 00:03:39.188 22:27:22 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:39.188 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # local es=0 00:03:39.189 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:39.189 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@630 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:39.189 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:03:39.189 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:39.189 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:03:39.189 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:39.189 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:03:39.189 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:39.189 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:39.189 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@645 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:39.189 [2024-07-15 22:27:22.677355] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
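The error above is the test's expected outcome: spdk_tgt refuses `--wait-for-rpc` when started with `--no-rpc-server`, and the `NOT` wrapper treats the non-zero exit as a pass. A minimal sketch of that kind of mutual-exclusion check (a hypothetical helper for illustration, not SPDK's actual argument parsing):

```python
# Hypothetical sketch of the flag conflict spdk_tgt reports above:
# --wait-for-rpc is meaningless when --no-rpc-server disables the RPC server.
def check_flags(no_rpc_server: bool, wait_for_rpc: bool) -> None:
    """Raise ValueError on the flag combination the log shows failing."""
    if no_rpc_server and wait_for_rpc:
        raise ValueError(
            "Cannot use '--wait-for-rpc' if no RPC server is going to be started."
        )
```

The harness then inverts the result: a raised error (non-zero exit) is what `NOT ... --no-rpc-server --wait-for-rpc` requires, which is why the run above ends with `es=1` and the test still passes.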
00:03:39.189 [2024-07-15 22:27:22.677478] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:03:39.448 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@645 -- # es=1 00:03:39.448 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:03:39.448 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:03:39.448 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:03:39.448 00:03:39.448 real 0m0.065s 00:03:39.448 user 0m0.045s 00:03:39.448 sys 0m0.019s 00:03:39.448 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:39.448 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:39.448 ************************************ 00:03:39.448 END TEST skip_rpc_with_delay 00:03:39.448 ************************************ 00:03:39.448 22:27:22 skip_rpc -- common/autotest_common.sh@1136 -- # return 0 00:03:39.448 22:27:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:39.448 22:27:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:39.448 22:27:22 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:39.448 22:27:22 skip_rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:39.448 22:27:22 skip_rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:39.448 22:27:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:39.448 ************************************ 00:03:39.448 START TEST exit_on_failed_rpc_init 00:03:39.448 ************************************ 00:03:39.448 22:27:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1117 -- # test_exit_on_failed_rpc_init 00:03:39.448 22:27:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1133107 00:03:39.448 22:27:22 skip_rpc.exit_on_failed_rpc_init -- 
rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:39.448 22:27:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1133107 00:03:39.448 22:27:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@823 -- # '[' -z 1133107 ']' 00:03:39.448 22:27:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:39.448 22:27:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@828 -- # local max_retries=100 00:03:39.448 22:27:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:39.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:39.448 22:27:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # xtrace_disable 00:03:39.448 22:27:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:39.448 [2024-07-15 22:27:22.785790] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
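The `waitforlisten` step above blocks until the freshly started target is accepting connections on `/var/tmp/spdk.sock`. A sketch of that idea, under the assumption it is a plain connect-poll loop on the UNIX domain socket (illustrative only, not the actual autotest helper):

```python
import socket
import time

def wait_for_listen(sock_path: str, timeout: float = 5.0,
                    interval: float = 0.05) -> bool:
    """Poll a UNIX domain socket path until something is listening on it.

    Returns True once a connect() succeeds, False if the deadline passes.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(sock_path)
            return True
        except OSError:
            # Not bound yet (or not listening): back off briefly and retry.
            time.sleep(interval)
    return False
```

The real helper also bounds the wait with `max_retries=100`, visible in the xtrace above; the timeout here plays the same role.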
00:03:39.448 [2024-07-15 22:27:22.785902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1133107 ] 00:03:39.448 [2024-07-15 22:27:22.843118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:39.705 [2024-07-15 22:27:22.953302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:39.964 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:03:39.964 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # return 0 00:03:39.964 22:27:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:39.964 22:27:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:39.964 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # local es=0 00:03:39.964 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:39.964 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@630 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:39.964 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:03:39.964 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:39.964 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:03:39.964 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:39.964 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:03:39.964 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:39.964 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:39.964 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@645 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:39.964 [2024-07-15 22:27:23.268951] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:03:39.964 [2024-07-15 22:27:23.269033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1133239 ] 00:03:39.964 [2024-07-15 22:27:23.329787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:39.964 [2024-07-15 22:27:23.446507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:03:39.964 [2024-07-15 22:27:23.446636] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
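The second target's failure above ("`/var/tmp/spdk.sock in use. Specify another.`") comes from the first instance still owning the RPC socket path: binding a UNIX domain socket whose filesystem entry already exists fails with `EADDRINUSE`. A minimal sketch of that behavior (illustrative only, not SPDK's `rpc.c`):

```python
import errno
import socket

def try_claim_rpc_socket(sock_path: str) -> bool:
    """Attempt to bind an RPC listen socket; False if the path is already taken.

    Mirrors the failure mode in the log: the second spdk_tgt cannot bind
    the socket path the first one created.
    """
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.bind(sock_path)
        s.listen(1)
        return True
    except OSError as e:
        s.close()
        if e.errno == errno.EADDRINUSE:
            return False
        raise
```

Note that the socket file persists on disk after the owner exits, which is why test teardown (and SPDK itself) must unlink stale socket paths before rebinding.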
00:03:39.964 [2024-07-15 22:27:23.446670] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:39.964 [2024-07-15 22:27:23.446681] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:40.223 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@645 -- # es=234 00:03:40.223 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:03:40.223 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # es=106 00:03:40.223 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # case "$es" in 00:03:40.223 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=1 00:03:40.223 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:03:40.223 22:27:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:40.223 22:27:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1133107 00:03:40.223 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@942 -- # '[' -z 1133107 ']' 00:03:40.223 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # kill -0 1133107 00:03:40.223 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@947 -- # uname 00:03:40.223 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:03:40.223 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1133107 00:03:40.223 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:03:40.223 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:03:40.223 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1133107' 
00:03:40.223 killing process with pid 1133107 00:03:40.223 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@961 -- # kill 1133107 00:03:40.223 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # wait 1133107 00:03:40.791 00:03:40.791 real 0m1.342s 00:03:40.791 user 0m1.520s 00:03:40.791 sys 0m0.444s 00:03:40.791 22:27:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:40.791 22:27:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:40.791 ************************************ 00:03:40.791 END TEST exit_on_failed_rpc_init 00:03:40.791 ************************************ 00:03:40.791 22:27:24 skip_rpc -- common/autotest_common.sh@1136 -- # return 0 00:03:40.791 22:27:24 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:40.791 00:03:40.791 real 0m13.753s 00:03:40.791 user 0m13.053s 00:03:40.791 sys 0m1.621s 00:03:40.791 22:27:24 skip_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:40.791 22:27:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.791 ************************************ 00:03:40.791 END TEST skip_rpc 00:03:40.791 ************************************ 00:03:40.791 22:27:24 -- common/autotest_common.sh@1136 -- # return 0 00:03:40.791 22:27:24 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:40.791 22:27:24 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:40.791 22:27:24 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:40.791 22:27:24 -- common/autotest_common.sh@10 -- # set +x 00:03:40.791 ************************************ 00:03:40.791 START TEST rpc_client 00:03:40.791 ************************************ 00:03:40.791 22:27:24 rpc_client -- common/autotest_common.sh@1117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:40.791 * Looking for test storage... 00:03:40.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:40.791 22:27:24 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:40.791 OK 00:03:40.791 22:27:24 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:40.791 00:03:40.791 real 0m0.067s 00:03:40.791 user 0m0.027s 00:03:40.791 sys 0m0.045s 00:03:40.791 22:27:24 rpc_client -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:40.791 22:27:24 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:40.791 ************************************ 00:03:40.791 END TEST rpc_client 00:03:40.791 ************************************ 00:03:40.791 22:27:24 -- common/autotest_common.sh@1136 -- # return 0 00:03:40.792 22:27:24 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:40.792 22:27:24 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:40.792 22:27:24 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:40.792 22:27:24 -- common/autotest_common.sh@10 -- # set +x 00:03:40.792 ************************************ 00:03:40.792 START TEST json_config 00:03:40.792 ************************************ 00:03:40.792 22:27:24 json_config -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:41.050 22:27:24 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:41.050 22:27:24 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:41.050 22:27:24 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:41.050 22:27:24 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:41.050 
22:27:24 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:41.051 22:27:24 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:41.051 22:27:24 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:41.051 22:27:24 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:41.051 22:27:24 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:41.051 22:27:24 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:41.051 22:27:24 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:41.051 22:27:24 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:41.051 22:27:24 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:41.051 22:27:24 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:41.051 22:27:24 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:41.051 22:27:24 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:41.051 22:27:24 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:41.051 22:27:24 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:41.051 22:27:24 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:41.051 22:27:24 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:41.051 22:27:24 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:41.051 22:27:24 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:41.051 22:27:24 json_config -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.051 22:27:24 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.051 22:27:24 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.051 22:27:24 json_config -- paths/export.sh@5 -- # export PATH 00:03:41.051 22:27:24 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.051 22:27:24 json_config -- nvmf/common.sh@47 -- # : 0 00:03:41.051 22:27:24 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:41.051 
22:27:24 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:41.051 22:27:24 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:41.051 22:27:24 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:41.051 22:27:24 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:41.051 22:27:24 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:41.051 22:27:24 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:41.051 22:27:24 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:41.051 22:27:24 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:41.051 22:27:24 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:41.051 22:27:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:41.051 22:27:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:41.051 22:27:24 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:41.051 22:27:24 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:41.051 22:27:24 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:41.051 22:27:24 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:41.051 22:27:24 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:41.051 22:27:24 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:41.051 22:27:24 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:41.051 22:27:24 json_config -- json_config/json_config.sh@34 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:41.051 22:27:24 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:41.051 22:27:24 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:41.051 22:27:24 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:41.051 22:27:24 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:03:41.051 INFO: JSON configuration test init 00:03:41.051 22:27:24 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:03:41.051 22:27:24 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:03:41.051 22:27:24 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:41.051 22:27:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:41.051 22:27:24 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:03:41.051 22:27:24 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:41.051 22:27:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:41.051 22:27:24 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:03:41.051 22:27:24 json_config -- json_config/common.sh@9 -- # local app=target 00:03:41.051 22:27:24 json_config -- json_config/common.sh@10 -- # shift 00:03:41.051 22:27:24 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:41.051 22:27:24 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:41.051 22:27:24 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:41.051 22:27:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:41.051 22:27:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 
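The `tgt_rpc` calls that follow all shell out to `scripts/rpc.py -s /var/tmp/spdk_tgt.sock <method>`, and SPDK's RPC server speaks JSON-RPC 2.0 over that socket. A sketch of the request body such a call produces (framing and id handling simplified; this is an assumption about the wire format for illustration, not a copy of rpc.py — the `nvmf_create_transport` method name and `"trtype": "TCP"` parameter are taken from the config dump earlier in this log):

```python
import json
from itertools import count

_ids = count(1)  # monotonically increasing request ids, as JSON-RPC expects

def make_rpc_request(method, params=None):
    """Serialize one JSON-RPC 2.0 request of the kind rpc.py sends."""
    req = {"jsonrpc": "2.0", "id": next(_ids), "method": method}
    if params is not None:
        req["params"] = params
    return json.dumps(req)

# A parameterless call seen in this test (load_config), and a parameterized one:
cfg_req = make_rpc_request("load_config")
nvmf_req = make_rpc_request("nvmf_create_transport", {"trtype": "TCP"})
```

The mapping from rpc.py's short flags (`-t`, `-u`, `-c`) to parameter names is omitted here; only the JSON-RPC envelope is sketched.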
00:03:41.051 22:27:24 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1133480 00:03:41.051 22:27:24 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:41.051 22:27:24 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:41.051 Waiting for target to run... 00:03:41.051 22:27:24 json_config -- json_config/common.sh@25 -- # waitforlisten 1133480 /var/tmp/spdk_tgt.sock 00:03:41.051 22:27:24 json_config -- common/autotest_common.sh@823 -- # '[' -z 1133480 ']' 00:03:41.051 22:27:24 json_config -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:41.051 22:27:24 json_config -- common/autotest_common.sh@828 -- # local max_retries=100 00:03:41.051 22:27:24 json_config -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:41.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:41.051 22:27:24 json_config -- common/autotest_common.sh@832 -- # xtrace_disable 00:03:41.051 22:27:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:41.051 [2024-07-15 22:27:24.368799] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:03:41.051 [2024-07-15 22:27:24.368907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1133480 ] 00:03:41.310 [2024-07-15 22:27:24.708026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:41.310 [2024-07-15 22:27:24.796601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:41.877 22:27:25 json_config -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:03:41.877 22:27:25 json_config -- common/autotest_common.sh@856 -- # return 0 00:03:41.877 22:27:25 json_config -- json_config/common.sh@26 -- # echo '' 00:03:41.877 00:03:41.877 22:27:25 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:03:41.877 22:27:25 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:03:41.877 22:27:25 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:41.877 22:27:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:41.877 22:27:25 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:03:41.877 22:27:25 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:03:41.877 22:27:25 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:41.877 22:27:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:41.877 22:27:25 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:41.877 22:27:25 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:03:41.877 22:27:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:45.164 22:27:28 json_config -- json_config/json_config.sh@276 -- # 
tgt_check_notification_types 00:03:45.164 22:27:28 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:45.164 22:27:28 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:45.164 22:27:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:45.164 22:27:28 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:45.164 22:27:28 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:45.164 22:27:28 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:45.164 22:27:28 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:03:45.164 22:27:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:45.164 22:27:28 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:03:45.423 22:27:28 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:03:45.423 22:27:28 json_config -- json_config/json_config.sh@48 -- # local get_types 00:03:45.423 22:27:28 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:03:45.423 22:27:28 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:03:45.423 22:27:28 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:45.423 22:27:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:45.423 22:27:28 json_config -- json_config/json_config.sh@55 -- # return 0 00:03:45.423 22:27:28 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:03:45.423 22:27:28 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:03:45.423 22:27:28 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:03:45.423 22:27:28 json_config -- 
json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:03:45.423 22:27:28 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:03:45.423 22:27:28 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:03:45.423 22:27:28 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:45.423 22:27:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:45.423 22:27:28 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:45.423 22:27:28 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:03:45.423 22:27:28 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:03:45.423 22:27:28 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:45.423 22:27:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:45.680 MallocForNvmf0 00:03:45.680 22:27:29 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:45.680 22:27:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:45.938 MallocForNvmf1 00:03:45.938 22:27:29 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:45.938 22:27:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:46.195 [2024-07-15 22:27:29.495977] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:46.195 22:27:29 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:46.195 22:27:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:46.452 22:27:29 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:46.452 22:27:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:46.710 22:27:30 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:46.710 22:27:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:46.967 22:27:30 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:46.967 22:27:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:47.225 [2024-07-15 22:27:30.475152] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:47.225 22:27:30 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:03:47.225 22:27:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:47.225 22:27:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.225 22:27:30 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:03:47.225 22:27:30 json_config -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:03:47.225 22:27:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.225 22:27:30 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:03:47.225 22:27:30 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:47.225 22:27:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:47.484 MallocBdevForConfigChangeCheck 00:03:47.484 22:27:30 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:03:47.484 22:27:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:47.484 22:27:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.484 22:27:30 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:03:47.484 22:27:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:47.742 22:27:31 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:03:47.742 INFO: shutting down applications... 
00:03:47.742 22:27:31 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:03:47.742 22:27:31 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:03:47.742 22:27:31 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:03:47.742 22:27:31 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:49.693 Calling clear_iscsi_subsystem 00:03:49.693 Calling clear_nvmf_subsystem 00:03:49.693 Calling clear_nbd_subsystem 00:03:49.693 Calling clear_ublk_subsystem 00:03:49.693 Calling clear_vhost_blk_subsystem 00:03:49.693 Calling clear_vhost_scsi_subsystem 00:03:49.693 Calling clear_bdev_subsystem 00:03:49.693 22:27:32 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:49.693 22:27:32 json_config -- json_config/json_config.sh@343 -- # count=100 00:03:49.693 22:27:32 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:03:49.693 22:27:32 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:49.693 22:27:32 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:49.693 22:27:32 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:49.950 22:27:33 json_config -- json_config/json_config.sh@345 -- # break 00:03:49.950 22:27:33 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:03:49.950 22:27:33 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:03:49.950 22:27:33 json_config -- 
json_config/common.sh@31 -- # local app=target 00:03:49.950 22:27:33 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:49.950 22:27:33 json_config -- json_config/common.sh@35 -- # [[ -n 1133480 ]] 00:03:49.950 22:27:33 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1133480 00:03:49.950 22:27:33 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:49.950 22:27:33 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:49.950 22:27:33 json_config -- json_config/common.sh@41 -- # kill -0 1133480 00:03:49.950 22:27:33 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:03:50.518 22:27:33 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:03:50.518 22:27:33 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:50.518 22:27:33 json_config -- json_config/common.sh@41 -- # kill -0 1133480 00:03:50.518 22:27:33 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:50.518 22:27:33 json_config -- json_config/common.sh@43 -- # break 00:03:50.519 22:27:33 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:50.519 22:27:33 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:50.519 SPDK target shutdown done 00:03:50.519 22:27:33 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:03:50.519 INFO: relaunching applications... 
00:03:50.519 22:27:33 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:50.519 22:27:33 json_config -- json_config/common.sh@9 -- # local app=target 00:03:50.519 22:27:33 json_config -- json_config/common.sh@10 -- # shift 00:03:50.519 22:27:33 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:50.519 22:27:33 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:50.519 22:27:33 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:50.519 22:27:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:50.519 22:27:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:50.519 22:27:33 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1134675 00:03:50.519 22:27:33 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:50.519 22:27:33 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:50.519 Waiting for target to run... 00:03:50.519 22:27:33 json_config -- json_config/common.sh@25 -- # waitforlisten 1134675 /var/tmp/spdk_tgt.sock 00:03:50.519 22:27:33 json_config -- common/autotest_common.sh@823 -- # '[' -z 1134675 ']' 00:03:50.519 22:27:33 json_config -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:50.519 22:27:33 json_config -- common/autotest_common.sh@828 -- # local max_retries=100 00:03:50.519 22:27:33 json_config -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:50.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:03:50.519 22:27:33 json_config -- common/autotest_common.sh@832 -- # xtrace_disable 00:03:50.519 22:27:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:50.519 [2024-07-15 22:27:33.769511] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:03:50.519 [2024-07-15 22:27:33.769604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1134675 ] 00:03:51.084 [2024-07-15 22:27:34.297258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:51.084 [2024-07-15 22:27:34.400697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:54.363 [2024-07-15 22:27:37.448559] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:54.363 [2024-07-15 22:27:37.481005] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:54.929 22:27:38 json_config -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:03:54.929 22:27:38 json_config -- common/autotest_common.sh@856 -- # return 0 00:03:54.929 22:27:38 json_config -- json_config/common.sh@26 -- # echo '' 00:03:54.929 00:03:54.929 22:27:38 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:03:54.929 22:27:38 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:54.929 INFO: Checking if target configuration is the same... 
00:03:54.929 22:27:38 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:54.929 22:27:38 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:03:54.929 22:27:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:54.929 + '[' 2 -ne 2 ']' 00:03:54.929 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:54.929 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:54.929 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:54.929 +++ basename /dev/fd/62 00:03:54.929 ++ mktemp /tmp/62.XXX 00:03:54.929 + tmp_file_1=/tmp/62.eVU 00:03:54.929 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:54.929 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:54.929 + tmp_file_2=/tmp/spdk_tgt_config.json.3Mb 00:03:54.929 + ret=0 00:03:54.929 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:55.188 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:55.188 + diff -u /tmp/62.eVU /tmp/spdk_tgt_config.json.3Mb 00:03:55.188 + echo 'INFO: JSON config files are the same' 00:03:55.188 INFO: JSON config files are the same 00:03:55.188 + rm /tmp/62.eVU /tmp/spdk_tgt_config.json.3Mb 00:03:55.188 + exit 0 00:03:55.188 22:27:38 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:03:55.188 22:27:38 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:55.188 INFO: changing configuration and checking if this can be detected... 
00:03:55.188 22:27:38 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:55.188 22:27:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:55.446 22:27:38 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:55.446 22:27:38 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:03:55.446 22:27:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:55.446 + '[' 2 -ne 2 ']' 00:03:55.446 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:55.446 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:03:55.446 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:55.446 +++ basename /dev/fd/62 00:03:55.446 ++ mktemp /tmp/62.XXX 00:03:55.446 + tmp_file_1=/tmp/62.t47 00:03:55.446 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:55.446 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:55.446 + tmp_file_2=/tmp/spdk_tgt_config.json.zcK 00:03:55.446 + ret=0 00:03:55.446 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:56.013 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:56.013 + diff -u /tmp/62.t47 /tmp/spdk_tgt_config.json.zcK 00:03:56.013 + ret=1 00:03:56.013 + echo '=== Start of file: /tmp/62.t47 ===' 00:03:56.013 + cat /tmp/62.t47 00:03:56.013 + echo '=== End of file: /tmp/62.t47 ===' 00:03:56.013 + echo '' 00:03:56.013 + echo '=== Start of file: /tmp/spdk_tgt_config.json.zcK ===' 00:03:56.013 + cat /tmp/spdk_tgt_config.json.zcK 00:03:56.013 + echo '=== End of file: /tmp/spdk_tgt_config.json.zcK ===' 00:03:56.013 + echo '' 00:03:56.013 + rm /tmp/62.t47 /tmp/spdk_tgt_config.json.zcK 00:03:56.013 + exit 1 00:03:56.013 22:27:39 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:03:56.013 INFO: configuration change detected. 
00:03:56.013 22:27:39 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:03:56.013 22:27:39 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:03:56.013 22:27:39 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:56.013 22:27:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.013 22:27:39 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:03:56.013 22:27:39 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:03:56.013 22:27:39 json_config -- json_config/json_config.sh@317 -- # [[ -n 1134675 ]] 00:03:56.013 22:27:39 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:03:56.013 22:27:39 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:03:56.013 22:27:39 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:56.013 22:27:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.013 22:27:39 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:03:56.013 22:27:39 json_config -- json_config/json_config.sh@193 -- # uname -s 00:03:56.013 22:27:39 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:03:56.013 22:27:39 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:03:56.013 22:27:39 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:03:56.013 22:27:39 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:03:56.013 22:27:39 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:56.013 22:27:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.013 22:27:39 json_config -- json_config/json_config.sh@323 -- # killprocess 1134675 00:03:56.013 22:27:39 json_config -- common/autotest_common.sh@942 -- # '[' -z 1134675 ']' 00:03:56.013 22:27:39 json_config -- common/autotest_common.sh@946 -- # kill -0 
1134675 00:03:56.013 22:27:39 json_config -- common/autotest_common.sh@947 -- # uname 00:03:56.013 22:27:39 json_config -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:03:56.013 22:27:39 json_config -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1134675 00:03:56.013 22:27:39 json_config -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:03:56.013 22:27:39 json_config -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:03:56.013 22:27:39 json_config -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1134675' 00:03:56.013 killing process with pid 1134675 00:03:56.013 22:27:39 json_config -- common/autotest_common.sh@961 -- # kill 1134675 00:03:56.013 22:27:39 json_config -- common/autotest_common.sh@966 -- # wait 1134675 00:03:57.914 22:27:40 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:57.914 22:27:40 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:03:57.914 22:27:40 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:57.914 22:27:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.914 22:27:41 json_config -- json_config/json_config.sh@328 -- # return 0 00:03:57.914 22:27:41 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:03:57.914 INFO: Success 00:03:57.914 00:03:57.914 real 0m16.756s 00:03:57.914 user 0m18.730s 00:03:57.914 sys 0m2.051s 00:03:57.914 22:27:41 json_config -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:57.914 22:27:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.914 ************************************ 00:03:57.914 END TEST json_config 00:03:57.914 ************************************ 00:03:57.914 22:27:41 -- common/autotest_common.sh@1136 -- # return 0 00:03:57.914 22:27:41 -- 
spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:57.914 22:27:41 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:57.914 22:27:41 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:57.915 22:27:41 -- common/autotest_common.sh@10 -- # set +x 00:03:57.915 ************************************ 00:03:57.915 START TEST json_config_extra_key 00:03:57.915 ************************************ 00:03:57.915 22:27:41 json_config_extra_key -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:57.915 22:27:41 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:57.915 22:27:41 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:03:57.915 22:27:41 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:57.915 22:27:41 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:57.915 22:27:41 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:57.915 22:27:41 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:57.915 22:27:41 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:57.915 22:27:41 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:57.915 22:27:41 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:57.915 22:27:41 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:57.915 22:27:41 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:57.915 22:27:41 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:57.915 22:27:41 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:57.915 22:27:41 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:57.915 22:27:41 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:57.915 22:27:41 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:57.915 22:27:41 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:57.915 22:27:41 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:57.915 22:27:41 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:57.915 22:27:41 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:57.915 22:27:41 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:57.915 22:27:41 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:57.915 22:27:41 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.915 22:27:41 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.915 22:27:41 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.915 22:27:41 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:03:57.915 22:27:41 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.915 22:27:41 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:03:57.915 22:27:41 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:57.915 22:27:41 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:57.915 22:27:41 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:57.915 22:27:41 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:57.915 22:27:41 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:03:57.915 22:27:41 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:57.915 22:27:41 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:57.915 22:27:41 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:57.915 22:27:41 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:57.915 22:27:41 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:03:57.915 22:27:41 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:03:57.915 22:27:41 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:57.915 22:27:41 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:03:57.915 22:27:41 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:57.915 22:27:41 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:03:57.915 22:27:41 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:03:57.915 22:27:41 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:03:57.915 22:27:41 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:57.915 22:27:41 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:03:57.915 INFO: launching applications... 
00:03:57.915 22:27:41 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:57.915 22:27:41 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:03:57.915 22:27:41 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:03:57.915 22:27:41 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:57.915 22:27:41 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:57.915 22:27:41 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:03:57.915 22:27:41 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:57.915 22:27:41 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:57.915 22:27:41 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1135646 00:03:57.915 22:27:41 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:57.915 22:27:41 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:57.915 Waiting for target to run... 
00:03:57.915 22:27:41 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1135646 /var/tmp/spdk_tgt.sock 00:03:57.915 22:27:41 json_config_extra_key -- common/autotest_common.sh@823 -- # '[' -z 1135646 ']' 00:03:57.915 22:27:41 json_config_extra_key -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:57.915 22:27:41 json_config_extra_key -- common/autotest_common.sh@828 -- # local max_retries=100 00:03:57.915 22:27:41 json_config_extra_key -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:57.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:57.915 22:27:41 json_config_extra_key -- common/autotest_common.sh@832 -- # xtrace_disable 00:03:57.915 22:27:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:57.915 [2024-07-15 22:27:41.171387] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:03:57.915 [2024-07-15 22:27:41.171495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1135646 ] 00:03:58.173 [2024-07-15 22:27:41.507529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.173 [2024-07-15 22:27:41.596035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.737 22:27:42 json_config_extra_key -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:03:58.737 22:27:42 json_config_extra_key -- common/autotest_common.sh@856 -- # return 0 00:03:58.737 22:27:42 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:03:58.737 00:03:58.737 22:27:42 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:03:58.737 INFO: shutting down applications... 00:03:58.737 22:27:42 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:03:58.737 22:27:42 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:03:58.737 22:27:42 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:58.737 22:27:42 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1135646 ]] 00:03:58.737 22:27:42 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1135646 00:03:58.737 22:27:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:58.737 22:27:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:58.737 22:27:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1135646 00:03:58.737 22:27:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:03:59.301 22:27:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:03:59.301 22:27:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:59.301 22:27:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1135646 00:03:59.301 22:27:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:03:59.865 22:27:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:03:59.865 22:27:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:59.865 22:27:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1135646 00:03:59.865 22:27:43 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:59.865 22:27:43 json_config_extra_key -- json_config/common.sh@43 -- # break 00:03:59.865 22:27:43 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:59.865 22:27:43 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:59.865 SPDK target shutdown done 00:03:59.865 22:27:43 json_config_extra_key 
-- json_config/json_config_extra_key.sh@30 -- # echo Success 00:03:59.865 Success 00:03:59.865 00:03:59.866 real 0m2.048s 00:03:59.866 user 0m1.583s 00:03:59.866 sys 0m0.435s 00:03:59.866 22:27:43 json_config_extra_key -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:59.866 22:27:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:59.866 ************************************ 00:03:59.866 END TEST json_config_extra_key 00:03:59.866 ************************************ 00:03:59.866 22:27:43 -- common/autotest_common.sh@1136 -- # return 0 00:03:59.866 22:27:43 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:59.866 22:27:43 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:59.866 22:27:43 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:59.866 22:27:43 -- common/autotest_common.sh@10 -- # set +x 00:03:59.866 ************************************ 00:03:59.866 START TEST alias_rpc 00:03:59.866 ************************************ 00:03:59.866 22:27:43 alias_rpc -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:59.866 * Looking for test storage... 
00:03:59.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:03:59.866 22:27:43 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:03:59.866 22:27:43 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1135912 00:03:59.866 22:27:43 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:59.866 22:27:43 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1135912 00:03:59.866 22:27:43 alias_rpc -- common/autotest_common.sh@823 -- # '[' -z 1135912 ']' 00:03:59.866 22:27:43 alias_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:59.866 22:27:43 alias_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:03:59.866 22:27:43 alias_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:59.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:59.866 22:27:43 alias_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:03:59.866 22:27:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.866 [2024-07-15 22:27:43.267029] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:03:59.866 [2024-07-15 22:27:43.267113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1135912 ] 00:03:59.866 [2024-07-15 22:27:43.323463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.122 [2024-07-15 22:27:43.431691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.379 22:27:43 alias_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:00.379 22:27:43 alias_rpc -- common/autotest_common.sh@856 -- # return 0 00:04:00.379 22:27:43 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:00.636 22:27:43 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1135912 00:04:00.636 22:27:43 alias_rpc -- common/autotest_common.sh@942 -- # '[' -z 1135912 ']' 00:04:00.636 22:27:43 alias_rpc -- common/autotest_common.sh@946 -- # kill -0 1135912 00:04:00.636 22:27:43 alias_rpc -- common/autotest_common.sh@947 -- # uname 00:04:00.636 22:27:43 alias_rpc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:04:00.636 22:27:43 alias_rpc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1135912 00:04:00.636 22:27:43 alias_rpc -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:04:00.636 22:27:43 alias_rpc -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:04:00.636 22:27:43 alias_rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1135912' 00:04:00.636 killing process with pid 1135912 00:04:00.636 22:27:43 alias_rpc -- common/autotest_common.sh@961 -- # kill 1135912 00:04:00.636 22:27:43 alias_rpc -- common/autotest_common.sh@966 -- # wait 1135912 00:04:01.200 00:04:01.200 real 0m1.274s 00:04:01.200 user 0m1.342s 00:04:01.200 sys 0m0.430s 00:04:01.200 22:27:44 alias_rpc -- 
common/autotest_common.sh@1118 -- # xtrace_disable 00:04:01.200 22:27:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.200 ************************************ 00:04:01.200 END TEST alias_rpc 00:04:01.200 ************************************ 00:04:01.200 22:27:44 -- common/autotest_common.sh@1136 -- # return 0 00:04:01.200 22:27:44 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:01.200 22:27:44 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:01.200 22:27:44 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:01.200 22:27:44 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:01.200 22:27:44 -- common/autotest_common.sh@10 -- # set +x 00:04:01.200 ************************************ 00:04:01.200 START TEST spdkcli_tcp 00:04:01.200 ************************************ 00:04:01.200 22:27:44 spdkcli_tcp -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:01.200 * Looking for test storage... 
00:04:01.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:01.200 22:27:44 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:01.200 22:27:44 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:01.200 22:27:44 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:01.200 22:27:44 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:01.201 22:27:44 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:01.201 22:27:44 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:01.201 22:27:44 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:01.201 22:27:44 spdkcli_tcp -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:01.201 22:27:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:01.201 22:27:44 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1136197 00:04:01.201 22:27:44 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:01.201 22:27:44 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1136197 00:04:01.201 22:27:44 spdkcli_tcp -- common/autotest_common.sh@823 -- # '[' -z 1136197 ']' 00:04:01.201 22:27:44 spdkcli_tcp -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:01.201 22:27:44 spdkcli_tcp -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:01.201 22:27:44 spdkcli_tcp -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:01.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:01.201 22:27:44 spdkcli_tcp -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:01.201 22:27:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:01.201 [2024-07-15 22:27:44.587698] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:04:01.201 [2024-07-15 22:27:44.587804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1136197 ] 00:04:01.201 [2024-07-15 22:27:44.644015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:01.459 [2024-07-15 22:27:44.750396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:01.459 [2024-07-15 22:27:44.750400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.717 22:27:45 spdkcli_tcp -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:01.717 22:27:45 spdkcli_tcp -- common/autotest_common.sh@856 -- # return 0 00:04:01.717 22:27:45 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1136224 00:04:01.717 22:27:45 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:01.717 22:27:45 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:01.976 [ 00:04:01.976 "bdev_malloc_delete", 00:04:01.976 "bdev_malloc_create", 00:04:01.976 "bdev_null_resize", 00:04:01.976 "bdev_null_delete", 00:04:01.976 "bdev_null_create", 00:04:01.976 "bdev_nvme_cuse_unregister", 00:04:01.976 "bdev_nvme_cuse_register", 00:04:01.976 "bdev_opal_new_user", 00:04:01.976 "bdev_opal_set_lock_state", 00:04:01.976 "bdev_opal_delete", 00:04:01.976 "bdev_opal_get_info", 00:04:01.976 "bdev_opal_create", 00:04:01.976 "bdev_nvme_opal_revert", 00:04:01.976 "bdev_nvme_opal_init", 00:04:01.976 "bdev_nvme_send_cmd", 00:04:01.976 
"bdev_nvme_get_path_iostat", 00:04:01.976 "bdev_nvme_get_mdns_discovery_info", 00:04:01.976 "bdev_nvme_stop_mdns_discovery", 00:04:01.976 "bdev_nvme_start_mdns_discovery", 00:04:01.976 "bdev_nvme_set_multipath_policy", 00:04:01.976 "bdev_nvme_set_preferred_path", 00:04:01.976 "bdev_nvme_get_io_paths", 00:04:01.976 "bdev_nvme_remove_error_injection", 00:04:01.976 "bdev_nvme_add_error_injection", 00:04:01.976 "bdev_nvme_get_discovery_info", 00:04:01.976 "bdev_nvme_stop_discovery", 00:04:01.976 "bdev_nvme_start_discovery", 00:04:01.976 "bdev_nvme_get_controller_health_info", 00:04:01.976 "bdev_nvme_disable_controller", 00:04:01.976 "bdev_nvme_enable_controller", 00:04:01.976 "bdev_nvme_reset_controller", 00:04:01.976 "bdev_nvme_get_transport_statistics", 00:04:01.976 "bdev_nvme_apply_firmware", 00:04:01.976 "bdev_nvme_detach_controller", 00:04:01.976 "bdev_nvme_get_controllers", 00:04:01.976 "bdev_nvme_attach_controller", 00:04:01.976 "bdev_nvme_set_hotplug", 00:04:01.976 "bdev_nvme_set_options", 00:04:01.976 "bdev_passthru_delete", 00:04:01.976 "bdev_passthru_create", 00:04:01.976 "bdev_lvol_set_parent_bdev", 00:04:01.976 "bdev_lvol_set_parent", 00:04:01.976 "bdev_lvol_check_shallow_copy", 00:04:01.976 "bdev_lvol_start_shallow_copy", 00:04:01.976 "bdev_lvol_grow_lvstore", 00:04:01.976 "bdev_lvol_get_lvols", 00:04:01.976 "bdev_lvol_get_lvstores", 00:04:01.976 "bdev_lvol_delete", 00:04:01.976 "bdev_lvol_set_read_only", 00:04:01.976 "bdev_lvol_resize", 00:04:01.976 "bdev_lvol_decouple_parent", 00:04:01.976 "bdev_lvol_inflate", 00:04:01.976 "bdev_lvol_rename", 00:04:01.976 "bdev_lvol_clone_bdev", 00:04:01.976 "bdev_lvol_clone", 00:04:01.976 "bdev_lvol_snapshot", 00:04:01.976 "bdev_lvol_create", 00:04:01.976 "bdev_lvol_delete_lvstore", 00:04:01.976 "bdev_lvol_rename_lvstore", 00:04:01.976 "bdev_lvol_create_lvstore", 00:04:01.976 "bdev_raid_set_options", 00:04:01.976 "bdev_raid_remove_base_bdev", 00:04:01.976 "bdev_raid_add_base_bdev", 00:04:01.976 "bdev_raid_delete", 
00:04:01.976 "bdev_raid_create", 00:04:01.976 "bdev_raid_get_bdevs", 00:04:01.976 "bdev_error_inject_error", 00:04:01.976 "bdev_error_delete", 00:04:01.976 "bdev_error_create", 00:04:01.976 "bdev_split_delete", 00:04:01.976 "bdev_split_create", 00:04:01.976 "bdev_delay_delete", 00:04:01.976 "bdev_delay_create", 00:04:01.976 "bdev_delay_update_latency", 00:04:01.976 "bdev_zone_block_delete", 00:04:01.976 "bdev_zone_block_create", 00:04:01.976 "blobfs_create", 00:04:01.976 "blobfs_detect", 00:04:01.976 "blobfs_set_cache_size", 00:04:01.976 "bdev_aio_delete", 00:04:01.976 "bdev_aio_rescan", 00:04:01.976 "bdev_aio_create", 00:04:01.976 "bdev_ftl_set_property", 00:04:01.976 "bdev_ftl_get_properties", 00:04:01.976 "bdev_ftl_get_stats", 00:04:01.976 "bdev_ftl_unmap", 00:04:01.976 "bdev_ftl_unload", 00:04:01.976 "bdev_ftl_delete", 00:04:01.976 "bdev_ftl_load", 00:04:01.976 "bdev_ftl_create", 00:04:01.976 "bdev_virtio_attach_controller", 00:04:01.976 "bdev_virtio_scsi_get_devices", 00:04:01.976 "bdev_virtio_detach_controller", 00:04:01.976 "bdev_virtio_blk_set_hotplug", 00:04:01.976 "bdev_iscsi_delete", 00:04:01.976 "bdev_iscsi_create", 00:04:01.976 "bdev_iscsi_set_options", 00:04:01.976 "accel_error_inject_error", 00:04:01.976 "ioat_scan_accel_module", 00:04:01.976 "dsa_scan_accel_module", 00:04:01.976 "iaa_scan_accel_module", 00:04:01.976 "vfu_virtio_create_scsi_endpoint", 00:04:01.976 "vfu_virtio_scsi_remove_target", 00:04:01.976 "vfu_virtio_scsi_add_target", 00:04:01.976 "vfu_virtio_create_blk_endpoint", 00:04:01.976 "vfu_virtio_delete_endpoint", 00:04:01.976 "keyring_file_remove_key", 00:04:01.976 "keyring_file_add_key", 00:04:01.977 "keyring_linux_set_options", 00:04:01.977 "iscsi_get_histogram", 00:04:01.977 "iscsi_enable_histogram", 00:04:01.977 "iscsi_set_options", 00:04:01.977 "iscsi_get_auth_groups", 00:04:01.977 "iscsi_auth_group_remove_secret", 00:04:01.977 "iscsi_auth_group_add_secret", 00:04:01.977 "iscsi_delete_auth_group", 00:04:01.977 
"iscsi_create_auth_group", 00:04:01.977 "iscsi_set_discovery_auth", 00:04:01.977 "iscsi_get_options", 00:04:01.977 "iscsi_target_node_request_logout", 00:04:01.977 "iscsi_target_node_set_redirect", 00:04:01.977 "iscsi_target_node_set_auth", 00:04:01.977 "iscsi_target_node_add_lun", 00:04:01.977 "iscsi_get_stats", 00:04:01.977 "iscsi_get_connections", 00:04:01.977 "iscsi_portal_group_set_auth", 00:04:01.977 "iscsi_start_portal_group", 00:04:01.977 "iscsi_delete_portal_group", 00:04:01.977 "iscsi_create_portal_group", 00:04:01.977 "iscsi_get_portal_groups", 00:04:01.977 "iscsi_delete_target_node", 00:04:01.977 "iscsi_target_node_remove_pg_ig_maps", 00:04:01.977 "iscsi_target_node_add_pg_ig_maps", 00:04:01.977 "iscsi_create_target_node", 00:04:01.977 "iscsi_get_target_nodes", 00:04:01.977 "iscsi_delete_initiator_group", 00:04:01.977 "iscsi_initiator_group_remove_initiators", 00:04:01.977 "iscsi_initiator_group_add_initiators", 00:04:01.977 "iscsi_create_initiator_group", 00:04:01.977 "iscsi_get_initiator_groups", 00:04:01.977 "nvmf_set_crdt", 00:04:01.977 "nvmf_set_config", 00:04:01.977 "nvmf_set_max_subsystems", 00:04:01.977 "nvmf_stop_mdns_prr", 00:04:01.977 "nvmf_publish_mdns_prr", 00:04:01.977 "nvmf_subsystem_get_listeners", 00:04:01.977 "nvmf_subsystem_get_qpairs", 00:04:01.977 "nvmf_subsystem_get_controllers", 00:04:01.977 "nvmf_get_stats", 00:04:01.977 "nvmf_get_transports", 00:04:01.977 "nvmf_create_transport", 00:04:01.977 "nvmf_get_targets", 00:04:01.977 "nvmf_delete_target", 00:04:01.977 "nvmf_create_target", 00:04:01.977 "nvmf_subsystem_allow_any_host", 00:04:01.977 "nvmf_subsystem_remove_host", 00:04:01.977 "nvmf_subsystem_add_host", 00:04:01.977 "nvmf_ns_remove_host", 00:04:01.977 "nvmf_ns_add_host", 00:04:01.977 "nvmf_subsystem_remove_ns", 00:04:01.977 "nvmf_subsystem_add_ns", 00:04:01.977 "nvmf_subsystem_listener_set_ana_state", 00:04:01.977 "nvmf_discovery_get_referrals", 00:04:01.977 "nvmf_discovery_remove_referral", 00:04:01.977 
"nvmf_discovery_add_referral", 00:04:01.977 "nvmf_subsystem_remove_listener", 00:04:01.977 "nvmf_subsystem_add_listener", 00:04:01.977 "nvmf_delete_subsystem", 00:04:01.977 "nvmf_create_subsystem", 00:04:01.977 "nvmf_get_subsystems", 00:04:01.977 "env_dpdk_get_mem_stats", 00:04:01.977 "nbd_get_disks", 00:04:01.977 "nbd_stop_disk", 00:04:01.977 "nbd_start_disk", 00:04:01.977 "ublk_recover_disk", 00:04:01.977 "ublk_get_disks", 00:04:01.977 "ublk_stop_disk", 00:04:01.977 "ublk_start_disk", 00:04:01.977 "ublk_destroy_target", 00:04:01.977 "ublk_create_target", 00:04:01.977 "virtio_blk_create_transport", 00:04:01.977 "virtio_blk_get_transports", 00:04:01.977 "vhost_controller_set_coalescing", 00:04:01.977 "vhost_get_controllers", 00:04:01.977 "vhost_delete_controller", 00:04:01.977 "vhost_create_blk_controller", 00:04:01.977 "vhost_scsi_controller_remove_target", 00:04:01.977 "vhost_scsi_controller_add_target", 00:04:01.977 "vhost_start_scsi_controller", 00:04:01.977 "vhost_create_scsi_controller", 00:04:01.977 "thread_set_cpumask", 00:04:01.977 "framework_get_governor", 00:04:01.977 "framework_get_scheduler", 00:04:01.977 "framework_set_scheduler", 00:04:01.977 "framework_get_reactors", 00:04:01.977 "thread_get_io_channels", 00:04:01.977 "thread_get_pollers", 00:04:01.977 "thread_get_stats", 00:04:01.977 "framework_monitor_context_switch", 00:04:01.977 "spdk_kill_instance", 00:04:01.977 "log_enable_timestamps", 00:04:01.977 "log_get_flags", 00:04:01.977 "log_clear_flag", 00:04:01.977 "log_set_flag", 00:04:01.977 "log_get_level", 00:04:01.977 "log_set_level", 00:04:01.977 "log_get_print_level", 00:04:01.977 "log_set_print_level", 00:04:01.977 "framework_enable_cpumask_locks", 00:04:01.977 "framework_disable_cpumask_locks", 00:04:01.977 "framework_wait_init", 00:04:01.977 "framework_start_init", 00:04:01.977 "scsi_get_devices", 00:04:01.977 "bdev_get_histogram", 00:04:01.977 "bdev_enable_histogram", 00:04:01.977 "bdev_set_qos_limit", 00:04:01.977 
"bdev_set_qd_sampling_period", 00:04:01.977 "bdev_get_bdevs", 00:04:01.977 "bdev_reset_iostat", 00:04:01.977 "bdev_get_iostat", 00:04:01.977 "bdev_examine", 00:04:01.977 "bdev_wait_for_examine", 00:04:01.977 "bdev_set_options", 00:04:01.977 "notify_get_notifications", 00:04:01.977 "notify_get_types", 00:04:01.977 "accel_get_stats", 00:04:01.977 "accel_set_options", 00:04:01.977 "accel_set_driver", 00:04:01.977 "accel_crypto_key_destroy", 00:04:01.977 "accel_crypto_keys_get", 00:04:01.977 "accel_crypto_key_create", 00:04:01.977 "accel_assign_opc", 00:04:01.977 "accel_get_module_info", 00:04:01.977 "accel_get_opc_assignments", 00:04:01.977 "vmd_rescan", 00:04:01.977 "vmd_remove_device", 00:04:01.977 "vmd_enable", 00:04:01.977 "sock_get_default_impl", 00:04:01.977 "sock_set_default_impl", 00:04:01.977 "sock_impl_set_options", 00:04:01.977 "sock_impl_get_options", 00:04:01.977 "iobuf_get_stats", 00:04:01.977 "iobuf_set_options", 00:04:01.977 "keyring_get_keys", 00:04:01.977 "framework_get_pci_devices", 00:04:01.977 "framework_get_config", 00:04:01.977 "framework_get_subsystems", 00:04:01.977 "vfu_tgt_set_base_path", 00:04:01.977 "trace_get_info", 00:04:01.977 "trace_get_tpoint_group_mask", 00:04:01.977 "trace_disable_tpoint_group", 00:04:01.977 "trace_enable_tpoint_group", 00:04:01.977 "trace_clear_tpoint_mask", 00:04:01.977 "trace_set_tpoint_mask", 00:04:01.977 "spdk_get_version", 00:04:01.977 "rpc_get_methods" 00:04:01.977 ] 00:04:01.977 22:27:45 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:01.977 22:27:45 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:01.977 22:27:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:01.977 22:27:45 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:01.977 22:27:45 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1136197 00:04:01.977 22:27:45 spdkcli_tcp -- common/autotest_common.sh@942 -- # '[' -z 1136197 ']' 00:04:01.977 22:27:45 spdkcli_tcp -- 
common/autotest_common.sh@946 -- # kill -0 1136197 00:04:01.977 22:27:45 spdkcli_tcp -- common/autotest_common.sh@947 -- # uname 00:04:01.977 22:27:45 spdkcli_tcp -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:04:01.977 22:27:45 spdkcli_tcp -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1136197 00:04:01.977 22:27:45 spdkcli_tcp -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:04:01.977 22:27:45 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:04:01.977 22:27:45 spdkcli_tcp -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1136197' 00:04:01.977 killing process with pid 1136197 00:04:01.977 22:27:45 spdkcli_tcp -- common/autotest_common.sh@961 -- # kill 1136197 00:04:01.977 22:27:45 spdkcli_tcp -- common/autotest_common.sh@966 -- # wait 1136197 00:04:02.543 00:04:02.543 real 0m1.295s 00:04:02.543 user 0m2.289s 00:04:02.543 sys 0m0.430s 00:04:02.543 22:27:45 spdkcli_tcp -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:02.543 22:27:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:02.543 ************************************ 00:04:02.543 END TEST spdkcli_tcp 00:04:02.543 ************************************ 00:04:02.543 22:27:45 -- common/autotest_common.sh@1136 -- # return 0 00:04:02.543 22:27:45 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:02.543 22:27:45 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:02.543 22:27:45 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:02.543 22:27:45 -- common/autotest_common.sh@10 -- # set +x 00:04:02.543 ************************************ 00:04:02.543 START TEST dpdk_mem_utility 00:04:02.543 ************************************ 00:04:02.543 22:27:45 dpdk_mem_utility -- common/autotest_common.sh@1117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:02.543 * Looking for test storage... 00:04:02.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:02.543 22:27:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:02.543 22:27:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1136420 00:04:02.543 22:27:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:02.543 22:27:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1136420 00:04:02.543 22:27:45 dpdk_mem_utility -- common/autotest_common.sh@823 -- # '[' -z 1136420 ']' 00:04:02.543 22:27:45 dpdk_mem_utility -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:02.543 22:27:45 dpdk_mem_utility -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:02.543 22:27:45 dpdk_mem_utility -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:02.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:02.543 22:27:45 dpdk_mem_utility -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:02.543 22:27:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:02.543 [2024-07-15 22:27:45.933258] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:04:02.543 [2024-07-15 22:27:45.933362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1136420 ] 00:04:02.543 [2024-07-15 22:27:45.988875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.802 [2024-07-15 22:27:46.092659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.060 22:27:46 dpdk_mem_utility -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:03.060 22:27:46 dpdk_mem_utility -- common/autotest_common.sh@856 -- # return 0 00:04:03.060 22:27:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:03.060 22:27:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:03.060 22:27:46 dpdk_mem_utility -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:03.060 22:27:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:03.060 { 00:04:03.060 "filename": "/tmp/spdk_mem_dump.txt" 00:04:03.060 } 00:04:03.060 22:27:46 dpdk_mem_utility -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:03.060 22:27:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:03.060 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:03.060 1 heaps totaling size 814.000000 MiB 00:04:03.060 size: 814.000000 MiB heap id: 0 00:04:03.060 end heaps---------- 00:04:03.060 8 mempools totaling size 598.116089 MiB 00:04:03.060 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:03.060 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:03.060 size: 84.521057 MiB name: bdev_io_1136420 00:04:03.060 size: 51.011292 MiB name: evtpool_1136420 00:04:03.060 size: 50.003479 MiB name: msgpool_1136420 00:04:03.060 
size: 21.763794 MiB name: PDU_Pool 00:04:03.060 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:03.060 size: 0.026123 MiB name: Session_Pool 00:04:03.060 end mempools------- 00:04:03.060 6 memzones totaling size 4.142822 MiB 00:04:03.060 size: 1.000366 MiB name: RG_ring_0_1136420 00:04:03.060 size: 1.000366 MiB name: RG_ring_1_1136420 00:04:03.060 size: 1.000366 MiB name: RG_ring_4_1136420 00:04:03.060 size: 1.000366 MiB name: RG_ring_5_1136420 00:04:03.060 size: 0.125366 MiB name: RG_ring_2_1136420 00:04:03.060 size: 0.015991 MiB name: RG_ring_3_1136420 00:04:03.060 end memzones------- 00:04:03.060 22:27:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:03.060 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:03.060 list of free elements. size: 12.519348 MiB 00:04:03.060 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:03.060 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:03.060 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:03.060 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:03.060 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:03.060 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:03.060 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:03.060 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:03.060 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:03.060 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:03.060 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:03.060 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:03.060 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:03.060 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:03.060 element at address: 0x200003a00000 with size: 
0.355530 MiB
00:04:03.060 list of standard malloc elements. size: 199.218079 MiB
00:04:03.060 element at address: 0x20000b3fff80 with size: 132.000122 MiB
00:04:03.060 element at address: 0x2000071fff80 with size: 64.000122 MiB
00:04:03.060 element at address: 0x200018efff80 with size: 1.000122 MiB
00:04:03.060 element at address: 0x2000190fff80 with size: 1.000122 MiB
00:04:03.060 element at address: 0x2000192fff80 with size: 1.000122 MiB
00:04:03.060 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:04:03.060 element at address: 0x2000192eff00 with size: 0.062622 MiB
00:04:03.060 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:04:03.060 element at address: 0x2000192efdc0 with size: 0.000305 MiB
00:04:03.060 element at address: 0x2000002d7740 with size: 0.000183 MiB
00:04:03.060 element at address: 0x2000002d7800 with size: 0.000183 MiB
00:04:03.060 element at address: 0x2000002d78c0 with size: 0.000183 MiB
00:04:03.060 element at address: 0x2000002d7ac0 with size: 0.000183 MiB
00:04:03.060 element at address: 0x2000002d7b80 with size: 0.000183 MiB
00:04:03.060 element at address: 0x2000002d7c40 with size: 0.000183 MiB
00:04:03.060 element at address: 0x2000003d9e40 with size: 0.000183 MiB
00:04:03.060 element at address: 0x20000087ce00 with size: 0.000183 MiB
00:04:03.060 element at address: 0x20000087cec0 with size: 0.000183 MiB
00:04:03.060 element at address: 0x2000008fd180 with size: 0.000183 MiB
00:04:03.060 element at address: 0x200003a5b040 with size: 0.000183 MiB
00:04:03.060 element at address: 0x200003adb300 with size: 0.000183 MiB
00:04:03.060 element at address: 0x200003adb500 with size: 0.000183 MiB
00:04:03.060 element at address: 0x200003adf7c0 with size: 0.000183 MiB
00:04:03.060 element at address: 0x200003affa80 with size: 0.000183 MiB
00:04:03.060 element at address: 0x200003affb40 with size: 0.000183 MiB
00:04:03.060 element at address: 0x200003eff0c0 with size: 0.000183 MiB
00:04:03.060 element at address: 0x2000070fdd80 with size: 0.000183 MiB
00:04:03.060 element at address: 0x20000b27da00 with size: 0.000183 MiB
00:04:03.060 element at address: 0x20000b27dac0 with size: 0.000183 MiB
00:04:03.060 element at address: 0x20000b2fdd80 with size: 0.000183 MiB
00:04:03.060 element at address: 0x2000138fa8c0 with size: 0.000183 MiB
00:04:03.060 element at address: 0x2000192efc40 with size: 0.000183 MiB
00:04:03.060 element at address: 0x2000192efd00 with size: 0.000183 MiB
00:04:03.060 element at address: 0x2000194bc740 with size: 0.000183 MiB
00:04:03.060 element at address: 0x20001aa95380 with size: 0.000183 MiB
00:04:03.060 element at address: 0x20001aa95440 with size: 0.000183 MiB
00:04:03.060 element at address: 0x200027e68f80 with size: 0.000183 MiB
00:04:03.060 element at address: 0x200027e69040 with size: 0.000183 MiB
00:04:03.060 element at address: 0x200027e6fc40 with size: 0.000183 MiB
00:04:03.060 element at address: 0x200027e6fe40 with size: 0.000183 MiB
00:04:03.060 element at address: 0x200027e6ff00 with size: 0.000183 MiB
00:04:03.060 list of memzone associated elements. size: 602.262573 MiB
00:04:03.060 element at address: 0x20001aa95500 with size: 211.416748 MiB
00:04:03.061 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:04:03.061 element at address: 0x200027e6ffc0 with size: 157.562561 MiB
00:04:03.061 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:04:03.061 element at address: 0x2000139fab80 with size: 84.020630 MiB
00:04:03.061 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1136420_0
00:04:03.061 element at address: 0x2000009ff380 with size: 48.003052 MiB
00:04:03.061 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1136420_0
00:04:03.061 element at address: 0x200003fff380 with size: 48.003052 MiB
00:04:03.061 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1136420_0
00:04:03.061 element at address: 0x2000195be940 with size: 20.255554 MiB
00:04:03.061 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:04:03.061 element at address: 0x200031dfeb40 with size: 18.005066 MiB
00:04:03.061 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:04:03.061 element at address: 0x2000005ffe00 with size: 2.000488 MiB
00:04:03.061 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1136420
00:04:03.061 element at address: 0x200003bffe00 with size: 2.000488 MiB
00:04:03.061 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1136420
00:04:03.061 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:04:03.061 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1136420
00:04:03.061 element at address: 0x20000b2fde40 with size: 1.008118 MiB
00:04:03.061 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:04:03.061 element at address: 0x2000194bc800 with size: 1.008118 MiB
00:04:03.061 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:04:03.061 element at address: 0x2000070fde40 with size: 1.008118 MiB
00:04:03.061 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:04:03.061 element at address: 0x2000008fd240 with size: 1.008118 MiB
00:04:03.061 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:04:03.061 element at address: 0x200003eff180 with size: 1.000488 MiB
00:04:03.061 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1136420
00:04:03.061 element at address: 0x200003affc00 with size: 1.000488 MiB
00:04:03.061 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1136420
00:04:03.061 element at address: 0x2000138fa980 with size: 1.000488 MiB
00:04:03.061 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1136420
00:04:03.061 element at address: 0x200031cfe940 with size: 1.000488 MiB
00:04:03.061 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1136420
00:04:03.061 element at address: 0x200003a5b100 with size: 0.500488 MiB
00:04:03.061 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1136420
00:04:03.061 element at address: 0x20000b27db80 with size: 0.500488 MiB
00:04:03.061 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:04:03.061 element at address: 0x20000087cf80 with size: 0.500488 MiB
00:04:03.061 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:04:03.061 element at address: 0x20001947c540 with size: 0.250488 MiB
00:04:03.061 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:04:03.061 element at address: 0x200003adf880 with size: 0.125488 MiB
00:04:03.061 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1136420
00:04:03.061 element at address: 0x2000070f5b80 with size: 0.031738 MiB
00:04:03.061 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:04:03.061 element at address: 0x200027e69100 with size: 0.023743 MiB
00:04:03.061 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:04:03.061 element at address: 0x200003adb5c0 with size: 0.016113 MiB
00:04:03.061 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1136420
00:04:03.061 element at address: 0x200027e6f240 with size: 0.002441 MiB
00:04:03.061 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:04:03.061 element at address: 0x2000002d7980 with size: 0.000305 MiB
00:04:03.061 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1136420
00:04:03.061 element at address: 0x200003adb3c0 with size: 0.000305 MiB
00:04:03.061 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1136420
00:04:03.061 element at address: 0x200027e6fd00 with size: 0.000305 MiB
00:04:03.061 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:04:03.061 22:27:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:04:03.061 22:27:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1136420
00:04:03.061 22:27:46 dpdk_mem_utility -- common/autotest_common.sh@942 -- # '[' -z 1136420 ']'
00:04:03.061 22:27:46 dpdk_mem_utility -- common/autotest_common.sh@946 -- # kill -0 1136420
00:04:03.061 22:27:46 dpdk_mem_utility -- common/autotest_common.sh@947 -- # uname
00:04:03.061 22:27:46 dpdk_mem_utility -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:04:03.061 22:27:46 dpdk_mem_utility -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1136420
00:04:03.061 22:27:46 dpdk_mem_utility -- common/autotest_common.sh@948 -- # process_name=reactor_0
00:04:03.061 22:27:46 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']'
00:04:03.061 22:27:46 dpdk_mem_utility -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1136420'
killing process with pid 1136420
22:27:46 dpdk_mem_utility -- common/autotest_common.sh@961 -- # kill 1136420
00:04:03.061 22:27:46 dpdk_mem_utility -- common/autotest_common.sh@966 -- # wait 1136420
00:04:03.627
00:04:03.627 real 0m1.114s
00:04:03.627 user 0m1.070s
00:04:03.627 sys 0m0.415s
00:04:03.627 22:27:46 dpdk_mem_utility -- common/autotest_common.sh@1118 -- # xtrace_disable
00:04:03.627 22:27:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:03.627 ************************************
00:04:03.627 END TEST dpdk_mem_utility
00:04:03.627 ************************************
00:04:03.627 22:27:46 -- common/autotest_common.sh@1136 -- # return 0
00:04:03.627 22:27:46 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:04:03.627 22:27:46 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
00:04:03.627 22:27:46 -- common/autotest_common.sh@1099 -- # xtrace_disable
00:04:03.627 22:27:46 -- common/autotest_common.sh@10 -- # set +x
00:04:03.627 ************************************
00:04:03.627 START TEST event
************************************
00:04:03.627 22:27:46 event -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
* Looking for test storage...
00:04:03.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:04:03.627 22:27:47 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:04:03.627 22:27:47 event -- bdev/nbd_common.sh@6 -- # set -e
00:04:03.627 22:27:47 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:03.627 22:27:47 event -- common/autotest_common.sh@1093 -- # '[' 6 -le 1 ']'
00:04:03.627 22:27:47 event -- common/autotest_common.sh@1099 -- # xtrace_disable
00:04:03.627 22:27:47 event -- common/autotest_common.sh@10 -- # set +x
00:04:03.627 ************************************
00:04:03.627 START TEST event_perf
************************************
00:04:03.627 22:27:47 event.event_perf -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:03.627 Running I/O for 1 seconds...
[2024-07-15 22:27:47.084327] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization...
00:04:03.627 [2024-07-15 22:27:47.084388] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1136610 ]
00:04:03.885 [2024-07-15 22:27:47.146289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:03.886 [2024-07-15 22:27:47.264656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:04:03.886 [2024-07-15 22:27:47.264712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:04:03.886 [2024-07-15 22:27:47.264776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:04:03.886 [2024-07-15 22:27:47.264779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:05.265 Running I/O for 1 seconds...
00:04:05.265 lcore 0: 230904
00:04:05.265 lcore 1: 230903
00:04:05.265 lcore 2: 230904
00:04:05.265 lcore 3: 230905
00:04:05.265 done.
00:04:05.265
00:04:05.265 real 0m1.318s
00:04:05.265 user 0m4.230s
00:04:05.265 sys 0m0.084s
00:04:05.265 22:27:48 event.event_perf -- common/autotest_common.sh@1118 -- # xtrace_disable
00:04:05.265 22:27:48 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:04:05.265 ************************************
00:04:05.265 END TEST event_perf
00:04:05.265 ************************************
00:04:05.265 22:27:48 event -- common/autotest_common.sh@1136 -- # return 0
00:04:05.265 22:27:48 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:04:05.265 22:27:48 event -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']'
00:04:05.265 22:27:48 event -- common/autotest_common.sh@1099 -- # xtrace_disable
00:04:05.265 22:27:48 event -- common/autotest_common.sh@10 -- # set +x
00:04:05.265 ************************************
00:04:05.265 START TEST event_reactor
************************************
00:04:05.265 22:27:48 event.event_reactor -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
[2024-07-15 22:27:48.445246] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization...
00:04:05.265 [2024-07-15 22:27:48.445311] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1136771 ]
00:04:05.265 [2024-07-15 22:27:48.507611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:05.265 [2024-07-15 22:27:48.624486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:06.670 test_start
00:04:06.670 oneshot
00:04:06.670 tick 100
00:04:06.670 tick 100
00:04:06.670 tick 250
00:04:06.670 tick 100
00:04:06.670 tick 100
00:04:06.670 tick 100
00:04:06.670 tick 250
00:04:06.670 tick 500
00:04:06.670 tick 100
00:04:06.670 tick 100
00:04:06.670 tick 250
00:04:06.670 tick 100
00:04:06.670 tick 100
00:04:06.670 test_end
00:04:06.670
00:04:06.670 real 0m1.311s
00:04:06.670 user 0m1.222s
00:04:06.670 sys 0m0.084s
00:04:06.670 22:27:49 event.event_reactor -- common/autotest_common.sh@1118 -- # xtrace_disable
00:04:06.670 22:27:49 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:04:06.670 ************************************
00:04:06.670 END TEST event_reactor
00:04:06.670 ************************************
00:04:06.670 22:27:49 event -- common/autotest_common.sh@1136 -- # return 0
00:04:06.670 22:27:49 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:06.670 22:27:49 event -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']'
00:04:06.670 22:27:49 event -- common/autotest_common.sh@1099 -- # xtrace_disable
00:04:06.670 22:27:49 event -- common/autotest_common.sh@10 -- # set +x
00:04:06.670 ************************************
00:04:06.670 START TEST event_reactor_perf
************************************
00:04:06.670 22:27:49 event.event_reactor_perf -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:06.670 [2024-07-15 22:27:49.796471] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization...
00:04:06.670 [2024-07-15 22:27:49.796538] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1136935 ]
00:04:06.670 [2024-07-15 22:27:49.859614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:06.670 [2024-07-15 22:27:49.977733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:07.604 test_start
00:04:07.604 test_end
00:04:07.604 Performance: 363631 events per second
00:04:07.604
00:04:07.604 real 0m1.319s
00:04:07.604 user 0m1.232s
00:04:07.604 sys 0m0.082s
00:04:07.604 22:27:51 event.event_reactor_perf -- common/autotest_common.sh@1118 -- # xtrace_disable
00:04:07.604 22:27:51 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:04:07.604 ************************************
00:04:07.604 END TEST event_reactor_perf
00:04:07.604 ************************************
00:04:07.863 22:27:51 event -- common/autotest_common.sh@1136 -- # return 0
00:04:07.863 22:27:51 event -- event/event.sh@49 -- # uname -s
00:04:07.863 22:27:51 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:04:07.863 22:27:51 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:04:07.863 22:27:51 event -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
00:04:07.863 22:27:51 event -- common/autotest_common.sh@1099 -- # xtrace_disable
00:04:07.863 22:27:51 event -- common/autotest_common.sh@10 -- # set +x
00:04:07.863 ************************************
00:04:07.863 START TEST event_scheduler
************************************
00:04:07.863 22:27:51 event.event_scheduler -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
* Looking for test storage...
00:04:07.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:04:07.863 22:27:51 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:04:07.863 22:27:51 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1137115
00:04:07.863 22:27:51 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:04:07.863 22:27:51 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:04:07.863 22:27:51 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1137115
00:04:07.863 22:27:51 event.event_scheduler -- common/autotest_common.sh@823 -- # '[' -z 1137115 ']'
00:04:07.863 22:27:51 event.event_scheduler -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:07.863 22:27:51 event.event_scheduler -- common/autotest_common.sh@828 -- # local max_retries=100
00:04:07.863 22:27:51 event.event_scheduler -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:07.863 22:27:51 event.event_scheduler -- common/autotest_common.sh@832 -- # xtrace_disable
00:04:07.863 22:27:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:07.863 [2024-07-15 22:27:51.241187] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization...
00:04:07.863 [2024-07-15 22:27:51.241263] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1137115 ]
00:04:07.863 [2024-07-15 22:27:51.304659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:08.122 [2024-07-15 22:27:51.417895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:08.122 [2024-07-15 22:27:51.417955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:04:08.122 [2024-07-15 22:27:51.417958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:04:08.122 [2024-07-15 22:27:51.417918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:04:08.122 22:27:51 event.event_scheduler -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:04:08.122 22:27:51 event.event_scheduler -- common/autotest_common.sh@856 -- # return 0
00:04:08.122 22:27:51 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:04:08.122 22:27:51 event.event_scheduler -- common/autotest_common.sh@553 -- # xtrace_disable
00:04:08.122 22:27:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:08.122 [2024-07-15 22:27:51.466758] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
[2024-07-15 22:27:51.466784] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor
[2024-07-15 22:27:51.466816] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20
[2024-07-15 22:27:51.466827] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80
[2024-07-15 22:27:51.466837] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95
22:27:51 event.event_scheduler -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:04:08.122 22:27:51 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:04:08.122 22:27:51 event.event_scheduler -- common/autotest_common.sh@553 -- # xtrace_disable
00:04:08.122 22:27:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:08.122 [2024-07-15 22:27:51.563103] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:04:08.122 22:27:51 event.event_scheduler -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:04:08.122 22:27:51 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:04:08.122 22:27:51 event.event_scheduler -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
00:04:08.122 22:27:51 event.event_scheduler -- common/autotest_common.sh@1099 -- # xtrace_disable
00:04:08.122 22:27:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:08.122 ************************************
00:04:08.122 START TEST scheduler_create_thread
************************************
00:04:08.122 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1117 -- # scheduler_create_thread
00:04:08.122 22:27:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:04:08.122 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable
00:04:08.122 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:08.122 2
00:04:08.122 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:04:08.122 22:27:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:04:08.122 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable
00:04:08.122 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:08.122 3
00:04:08.122 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:04:08.122 22:27:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:04:08.122 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable
00:04:08.122 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:08.122 4
00:04:08.122 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:04:08.122 22:27:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:04:08.122 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable
00:04:08.122 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:08.381 5
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:08.381 6
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:08.381 7
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:08.381 8
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:08.381 9
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:08.381 10
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable
00:04:08.381 22:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:08.948 22:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:04:08.948
00:04:08.948 real 0m0.591s
00:04:08.948 user 0m0.011s
00:04:08.948 sys 0m0.002s
00:04:08.948 22:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1118 -- # xtrace_disable
00:04:08.948 22:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:08.948 ************************************
00:04:08.948 END TEST scheduler_create_thread
00:04:08.948 ************************************
00:04:08.948 22:27:52 event.event_scheduler -- common/autotest_common.sh@1136 -- # return 0
00:04:08.948 22:27:52 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:04:08.948 22:27:52 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1137115
00:04:08.948 22:27:52 event.event_scheduler -- common/autotest_common.sh@942 -- # '[' -z 1137115 ']'
00:04:08.948 22:27:52 event.event_scheduler -- common/autotest_common.sh@946 -- # kill -0 1137115
00:04:08.948 22:27:52 event.event_scheduler -- common/autotest_common.sh@947 -- # uname
00:04:08.948 22:27:52 event.event_scheduler -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:04:08.948 22:27:52 event.event_scheduler -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1137115
00:04:08.948 22:27:52 event.event_scheduler -- common/autotest_common.sh@948 -- # process_name=reactor_2
00:04:08.948 22:27:52 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']'
00:04:08.948 22:27:52 event.event_scheduler -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1137115'
killing process with pid 1137115
22:27:52 event.event_scheduler -- common/autotest_common.sh@961 -- # kill 1137115
00:04:09.206 22:27:52 event.event_scheduler -- common/autotest_common.sh@966 -- # wait 1137115
00:04:09.206 [2024-07-15 22:27:52.663292] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:04:09.465
00:04:09.465 real 0m1.776s
00:04:09.466 user 0m2.236s
00:04:09.466 sys 0m0.351s
00:04:09.466 22:27:52 event.event_scheduler -- common/autotest_common.sh@1118 -- # xtrace_disable
00:04:09.466 22:27:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:09.466 ************************************
00:04:09.466 END TEST event_scheduler
00:04:09.466 ************************************
00:04:09.466 22:27:52 event -- common/autotest_common.sh@1136 -- # return 0
00:04:09.466 22:27:52 event -- event/event.sh@51 -- # modprobe -n nbd
00:04:09.466 22:27:52 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:04:09.466 22:27:52 event -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
00:04:09.466 22:27:52 event -- common/autotest_common.sh@1099 -- # xtrace_disable
00:04:09.466 22:27:52 event -- common/autotest_common.sh@10 -- # set +x
00:04:09.725 ************************************
00:04:09.725 START TEST app_repeat
************************************
00:04:09.725 22:27:52 event.app_repeat -- common/autotest_common.sh@1117 -- # app_repeat_test
00:04:09.725 22:27:52 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:09.725 22:27:52 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:09.725 22:27:52 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:04:09.725 22:27:52 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:09.725 22:27:52 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:04:09.725 22:27:52 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:04:09.725 22:27:52 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:04:09.725 22:27:52 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1137421
00:04:09.725 22:27:52 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:04:09.725 22:27:52 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:04:09.725 22:27:52 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1137421'
Process app_repeat pid: 1137421
00:04:09.725 22:27:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:04:09.725 22:27:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
spdk_app_start Round 0
00:04:09.725 22:27:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1137421 /var/tmp/spdk-nbd.sock
00:04:09.725 22:27:52 event.app_repeat -- common/autotest_common.sh@823 -- # '[' -z 1137421 ']'
00:04:09.725 22:27:52 event.app_repeat -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:09.725 22:27:52 event.app_repeat -- common/autotest_common.sh@828 -- # local max_retries=100
00:04:09.725 22:27:52 event.app_repeat -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:09.725 22:27:52 event.app_repeat -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:09.725 22:27:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:09.725 [2024-07-15 22:27:53.007203] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:04:09.725 [2024-07-15 22:27:53.007273] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1137421 ] 00:04:09.725 [2024-07-15 22:27:53.070583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:09.725 [2024-07-15 22:27:53.190857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:09.725 [2024-07-15 22:27:53.190872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.982 22:27:53 event.app_repeat -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:09.982 22:27:53 event.app_repeat -- common/autotest_common.sh@856 -- # return 0 00:04:09.982 22:27:53 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:10.240 Malloc0 00:04:10.240 22:27:53 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:10.498 Malloc1 00:04:10.498 22:27:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:10.498 22:27:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:10.498 22:27:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:10.498 22:27:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:10.498 22:27:53 event.app_repeat -- 
bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:10.498 22:27:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:10.498 22:27:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:10.498 22:27:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:10.498 22:27:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:10.498 22:27:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:10.498 22:27:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:10.498 22:27:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:10.498 22:27:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:10.498 22:27:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:10.498 22:27:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:10.498 22:27:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:10.755 /dev/nbd0 00:04:10.755 22:27:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:10.755 22:27:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:10.755 22:27:54 event.app_repeat -- common/autotest_common.sh@860 -- # local nbd_name=nbd0 00:04:10.755 22:27:54 event.app_repeat -- common/autotest_common.sh@861 -- # local i 00:04:10.755 22:27:54 event.app_repeat -- common/autotest_common.sh@863 -- # (( i = 1 )) 00:04:10.755 22:27:54 event.app_repeat -- common/autotest_common.sh@863 -- # (( i <= 20 )) 00:04:10.755 22:27:54 event.app_repeat -- common/autotest_common.sh@864 -- # grep -q -w nbd0 /proc/partitions 00:04:10.755 22:27:54 event.app_repeat -- common/autotest_common.sh@865 -- # break 00:04:10.755 22:27:54 event.app_repeat 
-- common/autotest_common.sh@876 -- # (( i = 1 )) 00:04:10.755 22:27:54 event.app_repeat -- common/autotest_common.sh@876 -- # (( i <= 20 )) 00:04:10.755 22:27:54 event.app_repeat -- common/autotest_common.sh@877 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:10.755 1+0 records in 00:04:10.755 1+0 records out 00:04:10.755 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000174323 s, 23.5 MB/s 00:04:10.755 22:27:54 event.app_repeat -- common/autotest_common.sh@878 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:10.755 22:27:54 event.app_repeat -- common/autotest_common.sh@878 -- # size=4096 00:04:10.755 22:27:54 event.app_repeat -- common/autotest_common.sh@879 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:10.756 22:27:54 event.app_repeat -- common/autotest_common.sh@880 -- # '[' 4096 '!=' 0 ']' 00:04:10.756 22:27:54 event.app_repeat -- common/autotest_common.sh@881 -- # return 0 00:04:10.756 22:27:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:10.756 22:27:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:10.756 22:27:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:11.013 /dev/nbd1 00:04:11.013 22:27:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:11.013 22:27:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:11.013 22:27:54 event.app_repeat -- common/autotest_common.sh@860 -- # local nbd_name=nbd1 00:04:11.013 22:27:54 event.app_repeat -- common/autotest_common.sh@861 -- # local i 00:04:11.013 22:27:54 event.app_repeat -- common/autotest_common.sh@863 -- # (( i = 1 )) 00:04:11.013 22:27:54 event.app_repeat -- common/autotest_common.sh@863 -- # (( i <= 20 )) 00:04:11.013 22:27:54 event.app_repeat -- 
common/autotest_common.sh@864 -- # grep -q -w nbd1 /proc/partitions 00:04:11.013 22:27:54 event.app_repeat -- common/autotest_common.sh@865 -- # break 00:04:11.013 22:27:54 event.app_repeat -- common/autotest_common.sh@876 -- # (( i = 1 )) 00:04:11.013 22:27:54 event.app_repeat -- common/autotest_common.sh@876 -- # (( i <= 20 )) 00:04:11.013 22:27:54 event.app_repeat -- common/autotest_common.sh@877 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:11.013 1+0 records in 00:04:11.013 1+0 records out 00:04:11.013 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207723 s, 19.7 MB/s 00:04:11.013 22:27:54 event.app_repeat -- common/autotest_common.sh@878 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:11.013 22:27:54 event.app_repeat -- common/autotest_common.sh@878 -- # size=4096 00:04:11.013 22:27:54 event.app_repeat -- common/autotest_common.sh@879 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:11.013 22:27:54 event.app_repeat -- common/autotest_common.sh@880 -- # '[' 4096 '!=' 0 ']' 00:04:11.013 22:27:54 event.app_repeat -- common/autotest_common.sh@881 -- # return 0 00:04:11.013 22:27:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:11.013 22:27:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:11.013 22:27:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:11.013 22:27:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:11.013 22:27:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:11.271 22:27:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:11.271 { 00:04:11.271 "nbd_device": "/dev/nbd0", 00:04:11.272 "bdev_name": "Malloc0" 00:04:11.272 }, 00:04:11.272 { 
00:04:11.272 "nbd_device": "/dev/nbd1", 00:04:11.272 "bdev_name": "Malloc1" 00:04:11.272 } 00:04:11.272 ]' 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:11.272 { 00:04:11.272 "nbd_device": "/dev/nbd0", 00:04:11.272 "bdev_name": "Malloc0" 00:04:11.272 }, 00:04:11.272 { 00:04:11.272 "nbd_device": "/dev/nbd1", 00:04:11.272 "bdev_name": "Malloc1" 00:04:11.272 } 00:04:11.272 ]' 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:11.272 /dev/nbd1' 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:11.272 /dev/nbd1' 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:11.272 256+0 records in 
00:04:11.272 256+0 records out 00:04:11.272 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00511969 s, 205 MB/s 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:11.272 256+0 records in 00:04:11.272 256+0 records out 00:04:11.272 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239317 s, 43.8 MB/s 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:11.272 256+0 records in 00:04:11.272 256+0 records out 00:04:11.272 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259279 s, 40.4 MB/s 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:11.272 22:27:54 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:11.272 22:27:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:11.530 22:27:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:11.530 22:27:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:11.530 22:27:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:11.530 22:27:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:11.530 22:27:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:11.530 22:27:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:11.530 22:27:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:11.530 22:27:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:11.530 22:27:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:11.530 22:27:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:11.788 22:27:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:11.788 22:27:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:11.788 22:27:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:11.788 22:27:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:11.788 22:27:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:11.788 22:27:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:11.788 22:27:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:11.788 22:27:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:11.788 22:27:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:11.788 22:27:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:11.788 22:27:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:12.047 22:27:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:12.047 22:27:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:12.047 22:27:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:12.305 22:27:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:12.305 22:27:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:12.305 22:27:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:12.305 22:27:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:12.305 22:27:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:12.305 22:27:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:12.305 22:27:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
count=0 00:04:12.305 22:27:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:12.305 22:27:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:12.305 22:27:55 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:12.562 22:27:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:12.821 [2024-07-15 22:27:56.125815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:12.821 [2024-07-15 22:27:56.240845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.821 [2024-07-15 22:27:56.240845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:12.821 [2024-07-15 22:27:56.302338] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:12.821 [2024-07-15 22:27:56.302417] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:15.348 22:27:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:15.348 22:27:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:15.348 spdk_app_start Round 1 00:04:15.348 22:27:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1137421 /var/tmp/spdk-nbd.sock 00:04:15.605 22:27:58 event.app_repeat -- common/autotest_common.sh@823 -- # '[' -z 1137421 ']' 00:04:15.606 22:27:58 event.app_repeat -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:15.606 22:27:58 event.app_repeat -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:15.606 22:27:58 event.app_repeat -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:15.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:15.606 22:27:58 event.app_repeat -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:15.606 22:27:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:15.606 22:27:59 event.app_repeat -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:15.606 22:27:59 event.app_repeat -- common/autotest_common.sh@856 -- # return 0 00:04:15.606 22:27:59 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:15.864 Malloc0 00:04:16.122 22:27:59 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:16.123 Malloc1 00:04:16.380 22:27:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:16.380 22:27:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:16.380 22:27:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:16.380 22:27:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:16.380 22:27:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:16.380 22:27:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:16.380 22:27:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:16.380 22:27:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:16.380 22:27:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:16.380 22:27:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:16.380 22:27:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:16.380 22:27:59 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:16.380 22:27:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:16.380 22:27:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:16.381 22:27:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:16.381 22:27:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:16.381 /dev/nbd0 00:04:16.639 22:27:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:16.639 22:27:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:16.639 22:27:59 event.app_repeat -- common/autotest_common.sh@860 -- # local nbd_name=nbd0 00:04:16.639 22:27:59 event.app_repeat -- common/autotest_common.sh@861 -- # local i 00:04:16.639 22:27:59 event.app_repeat -- common/autotest_common.sh@863 -- # (( i = 1 )) 00:04:16.639 22:27:59 event.app_repeat -- common/autotest_common.sh@863 -- # (( i <= 20 )) 00:04:16.639 22:27:59 event.app_repeat -- common/autotest_common.sh@864 -- # grep -q -w nbd0 /proc/partitions 00:04:16.639 22:27:59 event.app_repeat -- common/autotest_common.sh@865 -- # break 00:04:16.639 22:27:59 event.app_repeat -- common/autotest_common.sh@876 -- # (( i = 1 )) 00:04:16.639 22:27:59 event.app_repeat -- common/autotest_common.sh@876 -- # (( i <= 20 )) 00:04:16.639 22:27:59 event.app_repeat -- common/autotest_common.sh@877 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:16.639 1+0 records in 00:04:16.639 1+0 records out 00:04:16.639 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000161145 s, 25.4 MB/s 00:04:16.639 22:27:59 event.app_repeat -- common/autotest_common.sh@878 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:16.639 22:27:59 event.app_repeat -- common/autotest_common.sh@878 -- # size=4096 00:04:16.639 22:27:59 event.app_repeat -- 
common/autotest_common.sh@879 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:16.639 22:27:59 event.app_repeat -- common/autotest_common.sh@880 -- # '[' 4096 '!=' 0 ']' 00:04:16.639 22:27:59 event.app_repeat -- common/autotest_common.sh@881 -- # return 0 00:04:16.639 22:27:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:16.639 22:27:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:16.639 22:27:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:16.897 /dev/nbd1 00:04:16.897 22:28:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:16.897 22:28:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:16.897 22:28:00 event.app_repeat -- common/autotest_common.sh@860 -- # local nbd_name=nbd1 00:04:16.897 22:28:00 event.app_repeat -- common/autotest_common.sh@861 -- # local i 00:04:16.897 22:28:00 event.app_repeat -- common/autotest_common.sh@863 -- # (( i = 1 )) 00:04:16.897 22:28:00 event.app_repeat -- common/autotest_common.sh@863 -- # (( i <= 20 )) 00:04:16.897 22:28:00 event.app_repeat -- common/autotest_common.sh@864 -- # grep -q -w nbd1 /proc/partitions 00:04:16.897 22:28:00 event.app_repeat -- common/autotest_common.sh@865 -- # break 00:04:16.897 22:28:00 event.app_repeat -- common/autotest_common.sh@876 -- # (( i = 1 )) 00:04:16.897 22:28:00 event.app_repeat -- common/autotest_common.sh@876 -- # (( i <= 20 )) 00:04:16.897 22:28:00 event.app_repeat -- common/autotest_common.sh@877 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:16.897 1+0 records in 00:04:16.897 1+0 records out 00:04:16.897 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212074 s, 19.3 MB/s 00:04:16.897 22:28:00 event.app_repeat -- common/autotest_common.sh@878 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:16.897 22:28:00 event.app_repeat -- common/autotest_common.sh@878 -- # size=4096 00:04:16.897 22:28:00 event.app_repeat -- common/autotest_common.sh@879 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:16.897 22:28:00 event.app_repeat -- common/autotest_common.sh@880 -- # '[' 4096 '!=' 0 ']' 00:04:16.897 22:28:00 event.app_repeat -- common/autotest_common.sh@881 -- # return 0 00:04:16.897 22:28:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:16.897 22:28:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:16.897 22:28:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:16.897 22:28:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:16.897 22:28:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:17.155 22:28:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:17.155 { 00:04:17.155 "nbd_device": "/dev/nbd0", 00:04:17.155 "bdev_name": "Malloc0" 00:04:17.155 }, 00:04:17.155 { 00:04:17.156 "nbd_device": "/dev/nbd1", 00:04:17.156 "bdev_name": "Malloc1" 00:04:17.156 } 00:04:17.156 ]' 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:17.156 { 00:04:17.156 "nbd_device": "/dev/nbd0", 00:04:17.156 "bdev_name": "Malloc0" 00:04:17.156 }, 00:04:17.156 { 00:04:17.156 "nbd_device": "/dev/nbd1", 00:04:17.156 "bdev_name": "Malloc1" 00:04:17.156 } 00:04:17.156 ]' 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:17.156 /dev/nbd1' 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:17.156 /dev/nbd1' 00:04:17.156 
22:28:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:17.156 256+0 records in 00:04:17.156 256+0 records out 00:04:17.156 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0049898 s, 210 MB/s 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:17.156 256+0 records in 00:04:17.156 256+0 records out 00:04:17.156 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235358 s, 44.6 MB/s 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:17.156 256+0 records in 00:04:17.156 256+0 records out 00:04:17.156 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0229215 s, 45.7 MB/s 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:17.156 22:28:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:17.414 22:28:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:17.414 22:28:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:17.414 22:28:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:17.414 22:28:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:17.414 22:28:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:17.414 22:28:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:17.414 22:28:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:17.414 22:28:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:17.414 22:28:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:17.414 22:28:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:17.673 22:28:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:17.673 22:28:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:17.673 22:28:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:17.673 22:28:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:17.673 22:28:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:17.673 22:28:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:17.673 22:28:01 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:17.673 22:28:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:17.673 22:28:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:17.673 22:28:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:17.673 22:28:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:17.930 22:28:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:17.930 22:28:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:17.930 22:28:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:17.930 22:28:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:17.930 22:28:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:17.930 22:28:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:17.930 22:28:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:17.930 22:28:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:17.930 22:28:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:17.930 22:28:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:17.930 22:28:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:17.930 22:28:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:17.930 22:28:01 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:18.188 22:28:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:18.752 [2024-07-15 22:28:01.951925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:18.752 [2024-07-15 22:28:02.065866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:18.752 [2024-07-15 22:28:02.065870] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.752 [2024-07-15 22:28:02.128362] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:18.752 [2024-07-15 22:28:02.128440] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:21.276 22:28:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:21.276 22:28:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:21.276 spdk_app_start Round 2 00:04:21.276 22:28:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1137421 /var/tmp/spdk-nbd.sock 00:04:21.276 22:28:04 event.app_repeat -- common/autotest_common.sh@823 -- # '[' -z 1137421 ']' 00:04:21.276 22:28:04 event.app_repeat -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:21.276 22:28:04 event.app_repeat -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:21.276 22:28:04 event.app_repeat -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:21.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:21.276 22:28:04 event.app_repeat -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:21.276 22:28:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:21.534 22:28:04 event.app_repeat -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:21.534 22:28:04 event.app_repeat -- common/autotest_common.sh@856 -- # return 0 00:04:21.534 22:28:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:21.832 Malloc0 00:04:21.832 22:28:05 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:22.090 Malloc1 00:04:22.090 22:28:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:22.090 22:28:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:22.090 22:28:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:22.090 22:28:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:22.090 22:28:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:22.090 22:28:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:22.090 22:28:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:22.090 22:28:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:22.090 22:28:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:22.090 22:28:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:22.090 22:28:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:22.090 22:28:05 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:22.090 22:28:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:22.090 22:28:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:22.090 22:28:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:22.090 22:28:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:22.348 /dev/nbd0 00:04:22.348 22:28:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:22.348 22:28:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:22.348 22:28:05 event.app_repeat -- common/autotest_common.sh@860 -- # local nbd_name=nbd0 00:04:22.348 22:28:05 event.app_repeat -- common/autotest_common.sh@861 -- # local i 00:04:22.349 22:28:05 event.app_repeat -- common/autotest_common.sh@863 -- # (( i = 1 )) 00:04:22.349 22:28:05 event.app_repeat -- common/autotest_common.sh@863 -- # (( i <= 20 )) 00:04:22.349 22:28:05 event.app_repeat -- common/autotest_common.sh@864 -- # grep -q -w nbd0 /proc/partitions 00:04:22.349 22:28:05 event.app_repeat -- common/autotest_common.sh@865 -- # break 00:04:22.349 22:28:05 event.app_repeat -- common/autotest_common.sh@876 -- # (( i = 1 )) 00:04:22.349 22:28:05 event.app_repeat -- common/autotest_common.sh@876 -- # (( i <= 20 )) 00:04:22.349 22:28:05 event.app_repeat -- common/autotest_common.sh@877 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:22.349 1+0 records in 00:04:22.349 1+0 records out 00:04:22.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000163214 s, 25.1 MB/s 00:04:22.349 22:28:05 event.app_repeat -- common/autotest_common.sh@878 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:22.349 22:28:05 event.app_repeat -- common/autotest_common.sh@878 -- # size=4096 00:04:22.349 22:28:05 event.app_repeat -- 
common/autotest_common.sh@879 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:22.349 22:28:05 event.app_repeat -- common/autotest_common.sh@880 -- # '[' 4096 '!=' 0 ']' 00:04:22.349 22:28:05 event.app_repeat -- common/autotest_common.sh@881 -- # return 0 00:04:22.349 22:28:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:22.349 22:28:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:22.349 22:28:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:22.607 /dev/nbd1 00:04:22.607 22:28:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:22.607 22:28:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:22.607 22:28:05 event.app_repeat -- common/autotest_common.sh@860 -- # local nbd_name=nbd1 00:04:22.607 22:28:05 event.app_repeat -- common/autotest_common.sh@861 -- # local i 00:04:22.607 22:28:05 event.app_repeat -- common/autotest_common.sh@863 -- # (( i = 1 )) 00:04:22.607 22:28:05 event.app_repeat -- common/autotest_common.sh@863 -- # (( i <= 20 )) 00:04:22.607 22:28:05 event.app_repeat -- common/autotest_common.sh@864 -- # grep -q -w nbd1 /proc/partitions 00:04:22.607 22:28:06 event.app_repeat -- common/autotest_common.sh@865 -- # break 00:04:22.607 22:28:06 event.app_repeat -- common/autotest_common.sh@876 -- # (( i = 1 )) 00:04:22.607 22:28:06 event.app_repeat -- common/autotest_common.sh@876 -- # (( i <= 20 )) 00:04:22.607 22:28:06 event.app_repeat -- common/autotest_common.sh@877 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:22.607 1+0 records in 00:04:22.607 1+0 records out 00:04:22.607 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201279 s, 20.3 MB/s 00:04:22.607 22:28:06 event.app_repeat -- common/autotest_common.sh@878 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:22.607 22:28:06 event.app_repeat -- common/autotest_common.sh@878 -- # size=4096 00:04:22.607 22:28:06 event.app_repeat -- common/autotest_common.sh@879 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:22.607 22:28:06 event.app_repeat -- common/autotest_common.sh@880 -- # '[' 4096 '!=' 0 ']' 00:04:22.607 22:28:06 event.app_repeat -- common/autotest_common.sh@881 -- # return 0 00:04:22.607 22:28:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:22.607 22:28:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:22.607 22:28:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:22.607 22:28:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:22.607 22:28:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:22.865 22:28:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:22.865 { 00:04:22.865 "nbd_device": "/dev/nbd0", 00:04:22.865 "bdev_name": "Malloc0" 00:04:22.865 }, 00:04:22.865 { 00:04:22.865 "nbd_device": "/dev/nbd1", 00:04:22.865 "bdev_name": "Malloc1" 00:04:22.865 } 00:04:22.865 ]' 00:04:22.865 22:28:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:22.865 { 00:04:22.865 "nbd_device": "/dev/nbd0", 00:04:22.865 "bdev_name": "Malloc0" 00:04:22.865 }, 00:04:22.865 { 00:04:22.865 "nbd_device": "/dev/nbd1", 00:04:22.865 "bdev_name": "Malloc1" 00:04:22.865 } 00:04:22.865 ]' 00:04:22.865 22:28:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:22.865 22:28:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:22.865 /dev/nbd1' 00:04:22.865 22:28:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:22.865 /dev/nbd1' 00:04:22.865 
22:28:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:22.865 22:28:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:22.865 22:28:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:22.865 22:28:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:22.865 22:28:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:22.865 22:28:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:22.865 22:28:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:22.865 22:28:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:22.865 22:28:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:22.866 22:28:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:22.866 22:28:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:22.866 22:28:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:22.866 256+0 records in 00:04:22.866 256+0 records out 00:04:22.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00495334 s, 212 MB/s 00:04:22.866 22:28:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:22.866 22:28:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:22.866 256+0 records in 00:04:22.866 256+0 records out 00:04:22.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200703 s, 52.2 MB/s 00:04:22.866 22:28:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:22.866 22:28:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:22.866 256+0 records in 00:04:22.866 256+0 records out 00:04:22.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251568 s, 41.7 MB/s 00:04:22.866 22:28:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:22.866 22:28:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:22.866 22:28:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:22.866 22:28:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:22.866 22:28:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:22.866 22:28:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:22.866 22:28:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:22.866 22:28:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:23.124 22:28:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:23.124 22:28:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:23.124 22:28:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:23.124 22:28:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:23.124 22:28:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:23.124 22:28:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.124 22:28:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:23.124 22:28:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:23.124 22:28:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:23.124 22:28:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:23.124 22:28:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:23.382 22:28:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:23.382 22:28:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:23.382 22:28:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:23.382 22:28:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:23.382 22:28:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:23.382 22:28:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:23.382 22:28:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:23.382 22:28:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:23.382 22:28:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:23.382 22:28:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:23.641 22:28:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:23.641 22:28:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:23.641 22:28:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:23.641 22:28:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:23.641 22:28:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:23.641 22:28:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:23.641 22:28:06 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:23.641 22:28:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:23.641 22:28:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:23.641 22:28:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.641 22:28:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:23.898 22:28:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:23.898 22:28:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:23.898 22:28:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:23.899 22:28:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:23.899 22:28:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:23.899 22:28:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:23.899 22:28:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:23.899 22:28:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:23.899 22:28:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:23.899 22:28:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:23.899 22:28:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:23.899 22:28:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:23.899 22:28:07 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:24.156 22:28:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:24.414 [2024-07-15 22:28:07.758218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:24.414 [2024-07-15 22:28:07.873128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.414 [2024-07-15 22:28:07.873128] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:24.671 [2024-07-15 22:28:07.935818] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:24.671 [2024-07-15 22:28:07.935903] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:27.194 22:28:10 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1137421 /var/tmp/spdk-nbd.sock 00:04:27.194 22:28:10 event.app_repeat -- common/autotest_common.sh@823 -- # '[' -z 1137421 ']' 00:04:27.194 22:28:10 event.app_repeat -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:27.194 22:28:10 event.app_repeat -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:27.194 22:28:10 event.app_repeat -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:27.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:27.194 22:28:10 event.app_repeat -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:27.194 22:28:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:27.451 22:28:10 event.app_repeat -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:27.451 22:28:10 event.app_repeat -- common/autotest_common.sh@856 -- # return 0 00:04:27.451 22:28:10 event.app_repeat -- event/event.sh@39 -- # killprocess 1137421 00:04:27.451 22:28:10 event.app_repeat -- common/autotest_common.sh@942 -- # '[' -z 1137421 ']' 00:04:27.451 22:28:10 event.app_repeat -- common/autotest_common.sh@946 -- # kill -0 1137421 00:04:27.451 22:28:10 event.app_repeat -- common/autotest_common.sh@947 -- # uname 00:04:27.451 22:28:10 event.app_repeat -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:04:27.451 22:28:10 event.app_repeat -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1137421 00:04:27.451 22:28:10 event.app_repeat -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:04:27.451 22:28:10 event.app_repeat -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:04:27.451 22:28:10 event.app_repeat -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1137421' 00:04:27.451 killing process with pid 1137421 00:04:27.451 22:28:10 event.app_repeat -- common/autotest_common.sh@961 -- # kill 1137421 00:04:27.451 22:28:10 event.app_repeat -- common/autotest_common.sh@966 -- # wait 1137421 00:04:27.708 spdk_app_start is called in Round 0. 00:04:27.708 Shutdown signal received, stop current app iteration 00:04:27.708 Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 reinitialization... 00:04:27.708 spdk_app_start is called in Round 1. 00:04:27.709 Shutdown signal received, stop current app iteration 00:04:27.709 Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 reinitialization... 00:04:27.709 spdk_app_start is called in Round 2. 
00:04:27.709 Shutdown signal received, stop current app iteration 00:04:27.709 Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 reinitialization... 00:04:27.709 spdk_app_start is called in Round 3. 00:04:27.709 Shutdown signal received, stop current app iteration 00:04:27.709 22:28:11 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:27.709 22:28:11 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:27.709 00:04:27.709 real 0m18.037s 00:04:27.709 user 0m38.998s 00:04:27.709 sys 0m3.258s 00:04:27.709 22:28:11 event.app_repeat -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:27.709 22:28:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:27.709 ************************************ 00:04:27.709 END TEST app_repeat 00:04:27.709 ************************************ 00:04:27.709 22:28:11 event -- common/autotest_common.sh@1136 -- # return 0 00:04:27.709 22:28:11 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:27.709 22:28:11 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:27.709 22:28:11 event -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:27.709 22:28:11 event -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:27.709 22:28:11 event -- common/autotest_common.sh@10 -- # set +x 00:04:27.709 ************************************ 00:04:27.709 START TEST cpu_locks 00:04:27.709 ************************************ 00:04:27.709 22:28:11 event.cpu_locks -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:27.709 * Looking for test storage... 
00:04:27.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:27.709 22:28:11 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:27.709 22:28:11 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:27.709 22:28:11 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:27.709 22:28:11 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:27.709 22:28:11 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:27.709 22:28:11 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:27.709 22:28:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:27.709 ************************************ 00:04:27.709 START TEST default_locks 00:04:27.709 ************************************ 00:04:27.709 22:28:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1117 -- # default_locks 00:04:27.709 22:28:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1139780 00:04:27.709 22:28:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:27.709 22:28:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1139780 00:04:27.709 22:28:11 event.cpu_locks.default_locks -- common/autotest_common.sh@823 -- # '[' -z 1139780 ']' 00:04:27.709 22:28:11 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.709 22:28:11 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:27.709 22:28:11 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:27.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.709 22:28:11 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:27.709 22:28:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:27.709 [2024-07-15 22:28:11.200506] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:04:27.709 [2024-07-15 22:28:11.200599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1139780 ] 00:04:27.967 [2024-07-15 22:28:11.266678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.967 [2024-07-15 22:28:11.384167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.226 22:28:11 event.cpu_locks.default_locks -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:28.226 22:28:11 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # return 0 00:04:28.226 22:28:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1139780 00:04:28.226 22:28:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1139780 00:04:28.226 22:28:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:28.484 lslocks: write error 00:04:28.484 22:28:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1139780 00:04:28.484 22:28:11 event.cpu_locks.default_locks -- common/autotest_common.sh@942 -- # '[' -z 1139780 ']' 00:04:28.484 22:28:11 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # kill -0 1139780 00:04:28.484 22:28:11 event.cpu_locks.default_locks -- common/autotest_common.sh@947 -- # uname 00:04:28.484 22:28:11 event.cpu_locks.default_locks -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 
00:04:28.484 22:28:11 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1139780 00:04:28.742 22:28:11 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:04:28.742 22:28:11 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:04:28.742 22:28:11 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1139780' 00:04:28.742 killing process with pid 1139780 00:04:28.742 22:28:11 event.cpu_locks.default_locks -- common/autotest_common.sh@961 -- # kill 1139780 00:04:28.742 22:28:11 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # wait 1139780 00:04:29.000 22:28:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1139780 00:04:29.000 22:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # local es=0 00:04:29.000 22:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # valid_exec_arg waitforlisten 1139780 00:04:29.000 22:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@630 -- # local arg=waitforlisten 00:04:29.000 22:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:29.000 22:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@634 -- # type -t waitforlisten 00:04:29.000 22:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:29.000 22:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@645 -- # waitforlisten 1139780 00:04:29.000 22:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@823 -- # '[' -z 1139780 ']' 00:04:29.000 22:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.000 22:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # local max_retries=100 
00:04:29.000 22:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.000 22:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:29.000 22:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:29.000 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 838: kill: (1139780) - No such process 00:04:29.000 ERROR: process (pid: 1139780) is no longer running 00:04:29.000 22:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:29.001 22:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # return 1 00:04:29.001 22:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@645 -- # es=1 00:04:29.001 22:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:04:29.001 22:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:04:29.001 22:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:04:29.001 22:28:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:29.001 22:28:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:29.001 22:28:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:29.001 22:28:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:29.001 00:04:29.001 real 0m1.302s 00:04:29.001 user 0m1.256s 00:04:29.001 sys 0m0.524s 00:04:29.001 22:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:29.001 22:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:29.001 
************************************ 00:04:29.001 END TEST default_locks 00:04:29.001 ************************************ 00:04:29.001 22:28:12 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0 00:04:29.001 22:28:12 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:29.001 22:28:12 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:29.001 22:28:12 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:29.001 22:28:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:29.259 ************************************ 00:04:29.259 START TEST default_locks_via_rpc 00:04:29.259 ************************************ 00:04:29.259 22:28:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1117 -- # default_locks_via_rpc 00:04:29.259 22:28:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1139942 00:04:29.259 22:28:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:29.259 22:28:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1139942 00:04:29.259 22:28:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@823 -- # '[' -z 1139942 ']' 00:04:29.259 22:28:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.259 22:28:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:29.259 22:28:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:29.259 22:28:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:29.259 22:28:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.259 [2024-07-15 22:28:12.561027] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:04:29.259 [2024-07-15 22:28:12.561102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1139942 ] 00:04:29.259 [2024-07-15 22:28:12.626503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.259 [2024-07-15 22:28:12.745192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.192 22:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:30.192 22:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # return 0 00:04:30.192 22:28:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:30.192 22:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:30.192 22:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.192 22:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:30.192 22:28:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:30.192 22:28:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:30.192 22:28:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:30.192 22:28:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:30.192 22:28:13 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:30.192 22:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:30.192 22:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.192 22:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:30.192 22:28:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1139942 00:04:30.192 22:28:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1139942 00:04:30.192 22:28:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:30.450 22:28:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1139942 00:04:30.450 22:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@942 -- # '[' -z 1139942 ']' 00:04:30.450 22:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # kill -0 1139942 00:04:30.450 22:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@947 -- # uname 00:04:30.450 22:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:04:30.450 22:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1139942 00:04:30.450 22:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:04:30.450 22:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:04:30.450 22:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1139942' 00:04:30.450 killing process with pid 1139942 00:04:30.450 22:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@961 -- # kill 1139942 00:04:30.450 22:28:13 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # wait 1139942 00:04:31.016 00:04:31.016 real 0m1.717s 00:04:31.016 user 0m1.840s 00:04:31.016 sys 0m0.557s 00:04:31.016 22:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:31.016 22:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.016 ************************************ 00:04:31.016 END TEST default_locks_via_rpc 00:04:31.016 ************************************ 00:04:31.016 22:28:14 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0 00:04:31.016 22:28:14 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:31.016 22:28:14 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:31.016 22:28:14 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:31.016 22:28:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:31.016 ************************************ 00:04:31.016 START TEST non_locking_app_on_locked_coremask 00:04:31.016 ************************************ 00:04:31.016 22:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1117 -- # non_locking_app_on_locked_coremask 00:04:31.016 22:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1140236 00:04:31.016 22:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:31.016 22:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1140236 /var/tmp/spdk.sock 00:04:31.016 22:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@823 -- # '[' -z 1140236 ']' 00:04:31.017 22:28:14 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.017 22:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:31.017 22:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.017 22:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:31.017 22:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:31.017 [2024-07-15 22:28:14.327586] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:04:31.017 [2024-07-15 22:28:14.327665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1140236 ] 00:04:31.017 [2024-07-15 22:28:14.383071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.017 [2024-07-15 22:28:14.497172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.949 22:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:31.949 22:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # return 0 00:04:31.949 22:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1140368 00:04:31.949 22:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
--disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:31.949 22:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1140368 /var/tmp/spdk2.sock 00:04:31.949 22:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@823 -- # '[' -z 1140368 ']' 00:04:31.949 22:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:31.949 22:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:31.949 22:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:31.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:31.949 22:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:31.949 22:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:31.949 [2024-07-15 22:28:15.301522] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:04:31.949 [2024-07-15 22:28:15.301603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1140368 ] 00:04:31.949 [2024-07-15 22:28:15.395404] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
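The `locks_exist` checks in the trace (`lslocks -p PID | grep -q spdk_cpu_lock`, including the harmless `lslocks: write error` when `grep -q` closes the pipe early) verify that the target holds a per-core advisory file lock. A hedged sketch of taking and probing such a lock with `flock(1)`; the lock file path here is illustrative, not SPDK's actual `/var/tmp/spdk_cpu_lock*` naming:

```shell
# Sketch: hold an exclusive advisory lock on a file (as spdk_tgt does per core),
# then probe it non-blockingly the way a second process would discover it.
lockfile=$(mktemp /tmp/demo_cpu_lock.XXXXXX)

flock -x "$lockfile" sleep 2 &   # background holder keeps the lock ~2s
sleep 0.2                        # give it time to acquire

# Non-blocking probe: exits non-zero immediately while the lock is held.
if ! flock -n -x "$lockfile" true; then
    echo "core lock is held"
fi
wait
rm -f "$lockfile"
```

`lslocks` reads the same state from `/proc/locks`, which is why the tests can detect the lock from outside without touching the file themselves.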
00:04:31.949 [2024-07-15 22:28:15.395447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.207 [2024-07-15 22:28:15.629493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.773 22:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:32.773 22:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # return 0 00:04:32.773 22:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1140236 00:04:32.773 22:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1140236 00:04:32.773 22:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:33.337 lslocks: write error 00:04:33.337 22:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1140236 00:04:33.337 22:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@942 -- # '[' -z 1140236 ']' 00:04:33.337 22:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # kill -0 1140236 00:04:33.337 22:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # uname 00:04:33.337 22:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:04:33.338 22:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1140236 00:04:33.338 22:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:04:33.338 22:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:04:33.338 22:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@960 -- # echo 'killing process with pid 1140236' 00:04:33.338 killing process with pid 1140236 00:04:33.338 22:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@961 -- # kill 1140236 00:04:33.338 22:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # wait 1140236 00:04:34.269 22:28:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1140368 00:04:34.269 22:28:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@942 -- # '[' -z 1140368 ']' 00:04:34.269 22:28:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # kill -0 1140368 00:04:34.269 22:28:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # uname 00:04:34.269 22:28:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:04:34.269 22:28:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1140368 00:04:34.269 22:28:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:04:34.269 22:28:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:04:34.269 22:28:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1140368' 00:04:34.269 killing process with pid 1140368 00:04:34.269 22:28:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@961 -- # kill 1140368 00:04:34.269 22:28:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # wait 1140368 00:04:34.834 00:04:34.834 real 0m3.906s 00:04:34.834 user 0m4.244s 00:04:34.834 sys 0m1.089s 00:04:34.834 22:28:18 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:34.834 22:28:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:34.834 ************************************ 00:04:34.834 END TEST non_locking_app_on_locked_coremask 00:04:34.834 ************************************ 00:04:34.834 22:28:18 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0 00:04:34.834 22:28:18 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:34.834 22:28:18 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:34.834 22:28:18 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:34.834 22:28:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:34.834 ************************************ 00:04:34.834 START TEST locking_app_on_unlocked_coremask 00:04:34.834 ************************************ 00:04:34.834 22:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1117 -- # locking_app_on_unlocked_coremask 00:04:34.834 22:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1140704 00:04:34.834 22:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:34.834 22:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1140704 /var/tmp/spdk.sock 00:04:34.834 22:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@823 -- # '[' -z 1140704 ']' 00:04:34.834 22:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.834 22:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@828 -- # local max_retries=100 00:04:34.834 22:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.834 22:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:34.834 22:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:34.834 [2024-07-15 22:28:18.280283] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:04:34.834 [2024-07-15 22:28:18.280378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1140704 ] 00:04:35.092 [2024-07-15 22:28:18.338972] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:35.092 [2024-07-15 22:28:18.339027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.092 [2024-07-15 22:28:18.448013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.351 22:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:35.351 22:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # return 0 00:04:35.351 22:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1140808 00:04:35.351 22:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:35.351 22:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1140808 /var/tmp/spdk2.sock 00:04:35.351 22:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@823 -- # '[' -z 1140808 ']' 00:04:35.351 22:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:35.351 22:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:35.351 22:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:35.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:35.351 22:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:35.351 22:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:35.351 [2024-07-15 22:28:18.754354] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
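The `killprocess` traces above follow a fixed sequence: probe liveness with `kill -0`, look up the command name with `ps --no-headers -o comm=`, refuse to kill `sudo`, then kill and reap. A hedged standalone sketch of that sequence; `killprocess_demo` is an illustrative name, not the actual helper in autotest_common.sh:

```shell
# Sketch of the killprocess idiom from the trace: verify the PID is alive,
# check what it is, then terminate and reap it.
killprocess_demo() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1   # -0 probes liveness, sends no signal
    local name
    name=$(ps --no-headers -o comm= "$pid")  # same idiom as the trace
    if [ "$name" = "sudo" ]; then
        return 1                             # refuse to kill sudo, as the helper does
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap; non-zero SIGTERM status expected
    return 0
}

sleep 10 &
killprocess_demo $!
```

The final `wait` mirrors the `wait 1140704`-style lines in the log: reaping the child ensures the test does not race the next target against a dying one.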
00:04:35.351 [2024-07-15 22:28:18.754444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1140808 ] 00:04:35.351 [2024-07-15 22:28:18.846647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.648 [2024-07-15 22:28:19.079368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.215 22:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:36.215 22:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # return 0 00:04:36.215 22:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1140808 00:04:36.215 22:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:36.215 22:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1140808 00:04:36.780 lslocks: write error 00:04:36.780 22:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1140704 00:04:36.780 22:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@942 -- # '[' -z 1140704 ']' 00:04:36.780 22:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # kill -0 1140704 00:04:36.780 22:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # uname 00:04:36.780 22:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:04:36.780 22:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1140704 00:04:36.780 22:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@948 -- # process_name=reactor_0 00:04:36.780 22:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:04:36.780 22:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1140704' 00:04:36.780 killing process with pid 1140704 00:04:36.780 22:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@961 -- # kill 1140704 00:04:36.780 22:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # wait 1140704 00:04:37.729 22:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1140808 00:04:37.729 22:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@942 -- # '[' -z 1140808 ']' 00:04:37.729 22:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # kill -0 1140808 00:04:37.729 22:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # uname 00:04:37.729 22:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:04:37.729 22:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1140808 00:04:37.729 22:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:04:37.729 22:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:04:37.729 22:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1140808' 00:04:37.729 killing process with pid 1140808 00:04:37.729 22:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@961 -- # kill 1140808 00:04:37.729 22:28:21 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # wait 1140808 00:04:38.296 00:04:38.296 real 0m3.296s 00:04:38.296 user 0m3.446s 00:04:38.296 sys 0m1.045s 00:04:38.296 22:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:38.296 22:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:38.296 ************************************ 00:04:38.296 END TEST locking_app_on_unlocked_coremask 00:04:38.296 ************************************ 00:04:38.296 22:28:21 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0 00:04:38.296 22:28:21 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:38.296 22:28:21 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:38.296 22:28:21 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:38.296 22:28:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:38.296 ************************************ 00:04:38.296 START TEST locking_app_on_locked_coremask 00:04:38.296 ************************************ 00:04:38.296 22:28:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1117 -- # locking_app_on_locked_coremask 00:04:38.296 22:28:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1141116 00:04:38.296 22:28:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:38.296 22:28:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1141116 /var/tmp/spdk.sock 00:04:38.296 22:28:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@823 -- # '[' -z 1141116 ']' 00:04:38.296 22:28:21 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.296 22:28:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:38.296 22:28:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.296 22:28:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:38.296 22:28:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:38.296 [2024-07-15 22:28:21.630800] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:04:38.296 [2024-07-15 22:28:21.630903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1141116 ] 00:04:38.296 [2024-07-15 22:28:21.694749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.555 [2024-07-15 22:28:21.811728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.813 22:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:38.813 22:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # return 0 00:04:38.813 22:28:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1141245 00:04:38.813 22:28:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:38.813 22:28:22 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1141245 /var/tmp/spdk2.sock 00:04:38.813 22:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # local es=0 00:04:38.813 22:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # valid_exec_arg waitforlisten 1141245 /var/tmp/spdk2.sock 00:04:38.813 22:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@630 -- # local arg=waitforlisten 00:04:38.813 22:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:38.813 22:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@634 -- # type -t waitforlisten 00:04:38.813 22:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:38.813 22:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@645 -- # waitforlisten 1141245 /var/tmp/spdk2.sock 00:04:38.813 22:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@823 -- # '[' -z 1141245 ']' 00:04:38.813 22:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:38.813 22:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:38.813 22:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:38.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
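The `NOT waitforlisten ...` / `valid_exec_arg` lines above are a negative test: the second target must fail to start because core 0 is already claimed. A simplified, hedged sketch of that wrapper; the real helper also validates that its argument is an executable function or command and inspects the exit status range (`es > 128`):

```shell
# Sketch of the NOT negative-test wrapper from the trace: succeed only when
# the wrapped command fails.
NOT() {
    if "$@"; then
        return 1   # the command was expected to fail but succeeded
    fi
    return 0
}

NOT false && echo "negative test passed"
```

Inverting the status this way lets the surrounding `run_test` harness treat an expected failure as a pass without disabling `set -e` around the call.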
00:04:38.813 22:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:38.813 22:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:38.813 [2024-07-15 22:28:22.130373] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:04:38.813 [2024-07-15 22:28:22.130458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1141245 ] 00:04:38.813 [2024-07-15 22:28:22.224908] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1141116 has claimed it. 00:04:38.813 [2024-07-15 22:28:22.224984] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:39.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 838: kill: (1141245) - No such process 00:04:39.378 ERROR: process (pid: 1141245) is no longer running 00:04:39.378 22:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:39.378 22:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # return 1 00:04:39.378 22:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@645 -- # es=1 00:04:39.378 22:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:04:39.378 22:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:04:39.378 22:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:04:39.378 22:28:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1141116 00:04:39.378 22:28:22 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1141116 00:04:39.378 22:28:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:39.636 lslocks: write error 00:04:39.636 22:28:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1141116 00:04:39.636 22:28:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@942 -- # '[' -z 1141116 ']' 00:04:39.636 22:28:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # kill -0 1141116 00:04:39.636 22:28:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # uname 00:04:39.636 22:28:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:04:39.636 22:28:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1141116 00:04:39.636 22:28:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:04:39.636 22:28:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:04:39.636 22:28:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1141116' 00:04:39.636 killing process with pid 1141116 00:04:39.636 22:28:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@961 -- # kill 1141116 00:04:39.636 22:28:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # wait 1141116 00:04:40.202 00:04:40.202 real 0m2.007s 00:04:40.202 user 0m2.155s 00:04:40.202 sys 0m0.634s 00:04:40.202 22:28:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:40.202 22:28:23 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:04:40.202 ************************************ 00:04:40.202 END TEST locking_app_on_locked_coremask 00:04:40.202 ************************************ 00:04:40.202 22:28:23 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0 00:04:40.202 22:28:23 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:40.202 22:28:23 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:40.202 22:28:23 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:40.202 22:28:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:40.202 ************************************ 00:04:40.202 START TEST locking_overlapped_coremask 00:04:40.202 ************************************ 00:04:40.202 22:28:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1117 -- # locking_overlapped_coremask 00:04:40.202 22:28:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1141414 00:04:40.202 22:28:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1141414 /var/tmp/spdk.sock 00:04:40.202 22:28:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:40.202 22:28:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@823 -- # '[' -z 1141414 ']' 00:04:40.202 22:28:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.202 22:28:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:40.202 22:28:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:40.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.202 22:28:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:40.202 22:28:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:40.202 [2024-07-15 22:28:23.692033] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:04:40.202 [2024-07-15 22:28:23.692111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1141414 ] 00:04:40.460 [2024-07-15 22:28:23.753688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:40.460 [2024-07-15 22:28:23.869627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.460 [2024-07-15 22:28:23.869708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:40.460 [2024-07-15 22:28:23.869712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.393 22:28:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:41.393 22:28:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # return 0 00:04:41.393 22:28:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1141552 00:04:41.393 22:28:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:41.393 22:28:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1141552 /var/tmp/spdk2.sock 00:04:41.393 22:28:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # local es=0 00:04:41.393 22:28:24 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # valid_exec_arg waitforlisten 1141552 /var/tmp/spdk2.sock 00:04:41.393 22:28:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@630 -- # local arg=waitforlisten 00:04:41.393 22:28:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:41.393 22:28:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@634 -- # type -t waitforlisten 00:04:41.393 22:28:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:41.393 22:28:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@645 -- # waitforlisten 1141552 /var/tmp/spdk2.sock 00:04:41.393 22:28:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@823 -- # '[' -z 1141552 ']' 00:04:41.393 22:28:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:41.393 22:28:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:41.393 22:28:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:41.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:41.393 22:28:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:41.393 22:28:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:41.393 [2024-07-15 22:28:24.678641] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:04:41.393 [2024-07-15 22:28:24.678725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1141552 ] 00:04:41.393 [2024-07-15 22:28:24.767666] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1141414 has claimed it. 00:04:41.393 [2024-07-15 22:28:24.767733] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:41.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 838: kill: (1141552) - No such process 00:04:41.959 ERROR: process (pid: 1141552) is no longer running 00:04:41.959 22:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:41.959 22:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # return 1 00:04:41.959 22:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@645 -- # es=1 00:04:41.959 22:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:04:41.959 22:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:04:41.959 22:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:04:41.959 22:28:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:41.959 22:28:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:41.959 22:28:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:41.959 22:28:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:41.959 22:28:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1141414 00:04:41.959 22:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@942 -- # '[' -z 1141414 ']' 00:04:41.959 22:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # kill -0 1141414 00:04:41.959 22:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@947 -- # uname 00:04:41.959 22:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:04:41.959 22:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1141414 00:04:41.959 22:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:04:41.959 22:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:04:41.959 22:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1141414' 00:04:41.959 killing process with pid 1141414 00:04:41.959 22:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@961 -- # kill 1141414 00:04:41.959 22:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # wait 1141414 00:04:42.526 00:04:42.526 real 0m2.207s 00:04:42.526 user 0m6.175s 00:04:42.526 sys 0m0.476s 00:04:42.526 22:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:42.526 22:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:42.526 
************************************ 00:04:42.526 END TEST locking_overlapped_coremask 00:04:42.526 ************************************ 00:04:42.526 22:28:25 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0 00:04:42.526 22:28:25 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:42.526 22:28:25 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:42.526 22:28:25 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:42.526 22:28:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:42.526 ************************************ 00:04:42.526 START TEST locking_overlapped_coremask_via_rpc 00:04:42.526 ************************************ 00:04:42.526 22:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1117 -- # locking_overlapped_coremask_via_rpc 00:04:42.526 22:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1141722 00:04:42.526 22:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:42.526 22:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1141722 /var/tmp/spdk.sock 00:04:42.526 22:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@823 -- # '[' -z 1141722 ']' 00:04:42.526 22:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.526 22:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:42.526 22:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.526 22:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:42.526 22:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.526 [2024-07-15 22:28:25.951956] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:04:42.526 [2024-07-15 22:28:25.952034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1141722 ] 00:04:42.526 [2024-07-15 22:28:26.013214] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:42.526 [2024-07-15 22:28:26.013251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:42.785 [2024-07-15 22:28:26.128583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.785 [2024-07-15 22:28:26.128650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:42.785 [2024-07-15 22:28:26.128653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.718 22:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:43.718 22:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # return 0 00:04:43.718 22:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1141860 00:04:43.718 22:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1141860 /var/tmp/spdk2.sock 00:04:43.718 22:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:43.718 22:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@823 -- # '[' -z 1141860 ']' 00:04:43.718 22:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:43.718 22:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:43.718 22:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:43.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:43.718 22:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:43.718 22:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.718 [2024-07-15 22:28:26.940253] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:04:43.718 [2024-07-15 22:28:26.940336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1141860 ] 00:04:43.718 [2024-07-15 22:28:27.027461] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:43.719 [2024-07-15 22:28:27.027499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:43.975 [2024-07-15 22:28:27.246918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:43.975 [2024-07-15 22:28:27.249927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:04:43.975 [2024-07-15 22:28:27.249930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:44.538 22:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:44.538 22:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # return 0 00:04:44.538 22:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:44.538 22:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:44.538 22:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.538 22:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:44.538 22:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:44.538 22:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # local es=0 00:04:44.538 22:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:44.538 22:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:04:44.538 22:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:44.538 22:28:27 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:04:44.538 22:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:44.538 22:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@645 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:44.538 22:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:44.538 22:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.538 [2024-07-15 22:28:27.887978] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1141722 has claimed it. 00:04:44.538 request: 00:04:44.538 { 00:04:44.538 "method": "framework_enable_cpumask_locks", 00:04:44.538 "req_id": 1 00:04:44.538 } 00:04:44.538 Got JSON-RPC error response 00:04:44.538 response: 00:04:44.538 { 00:04:44.538 "code": -32603, 00:04:44.538 "message": "Failed to claim CPU core: 2" 00:04:44.538 } 00:04:44.538 22:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:04:44.538 22:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@645 -- # es=1 00:04:44.538 22:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:04:44.538 22:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:04:44.538 22:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:04:44.538 22:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1141722 /var/tmp/spdk.sock 00:04:44.538 22:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@823 
-- # '[' -z 1141722 ']' 00:04:44.538 22:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.538 22:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:44.539 22:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.539 22:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:44.539 22:28:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.795 22:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:44.795 22:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # return 0 00:04:44.795 22:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1141860 /var/tmp/spdk2.sock 00:04:44.795 22:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@823 -- # '[' -z 1141860 ']' 00:04:44.795 22:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:44.795 22:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:44.795 22:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:44.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:44.795 22:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:44.795 22:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.051 22:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:45.051 22:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # return 0 00:04:45.051 22:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:45.051 22:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:45.051 22:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:45.051 22:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:45.051 00:04:45.051 real 0m2.482s 00:04:45.051 user 0m1.204s 00:04:45.051 sys 0m0.212s 00:04:45.051 22:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:45.051 22:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.051 ************************************ 00:04:45.051 END TEST locking_overlapped_coremask_via_rpc 00:04:45.051 ************************************ 00:04:45.051 22:28:28 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0 00:04:45.051 22:28:28 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:45.051 22:28:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 
1141722 ]] 00:04:45.051 22:28:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1141722 00:04:45.051 22:28:28 event.cpu_locks -- common/autotest_common.sh@942 -- # '[' -z 1141722 ']' 00:04:45.051 22:28:28 event.cpu_locks -- common/autotest_common.sh@946 -- # kill -0 1141722 00:04:45.051 22:28:28 event.cpu_locks -- common/autotest_common.sh@947 -- # uname 00:04:45.051 22:28:28 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:04:45.051 22:28:28 event.cpu_locks -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1141722 00:04:45.051 22:28:28 event.cpu_locks -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:04:45.051 22:28:28 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:04:45.051 22:28:28 event.cpu_locks -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1141722' 00:04:45.051 killing process with pid 1141722 00:04:45.051 22:28:28 event.cpu_locks -- common/autotest_common.sh@961 -- # kill 1141722 00:04:45.051 22:28:28 event.cpu_locks -- common/autotest_common.sh@966 -- # wait 1141722 00:04:45.613 22:28:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1141860 ]] 00:04:45.613 22:28:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1141860 00:04:45.613 22:28:28 event.cpu_locks -- common/autotest_common.sh@942 -- # '[' -z 1141860 ']' 00:04:45.613 22:28:28 event.cpu_locks -- common/autotest_common.sh@946 -- # kill -0 1141860 00:04:45.613 22:28:28 event.cpu_locks -- common/autotest_common.sh@947 -- # uname 00:04:45.613 22:28:28 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:04:45.613 22:28:28 event.cpu_locks -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1141860 00:04:45.613 22:28:28 event.cpu_locks -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:04:45.613 22:28:28 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:04:45.613 22:28:28 
event.cpu_locks -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1141860' 00:04:45.613 killing process with pid 1141860 00:04:45.613 22:28:28 event.cpu_locks -- common/autotest_common.sh@961 -- # kill 1141860 00:04:45.613 22:28:28 event.cpu_locks -- common/autotest_common.sh@966 -- # wait 1141860 00:04:45.871 22:28:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:45.871 22:28:29 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:45.871 22:28:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1141722 ]] 00:04:45.871 22:28:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1141722 00:04:45.871 22:28:29 event.cpu_locks -- common/autotest_common.sh@942 -- # '[' -z 1141722 ']' 00:04:45.871 22:28:29 event.cpu_locks -- common/autotest_common.sh@946 -- # kill -0 1141722 00:04:45.871 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 946: kill: (1141722) - No such process 00:04:45.871 22:28:29 event.cpu_locks -- common/autotest_common.sh@969 -- # echo 'Process with pid 1141722 is not found' 00:04:45.871 Process with pid 1141722 is not found 00:04:45.871 22:28:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1141860 ]] 00:04:45.871 22:28:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1141860 00:04:45.871 22:28:29 event.cpu_locks -- common/autotest_common.sh@942 -- # '[' -z 1141860 ']' 00:04:45.871 22:28:29 event.cpu_locks -- common/autotest_common.sh@946 -- # kill -0 1141860 00:04:45.871 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 946: kill: (1141860) - No such process 00:04:45.871 22:28:29 event.cpu_locks -- common/autotest_common.sh@969 -- # echo 'Process with pid 1141860 is not found' 00:04:45.871 Process with pid 1141860 is not found 00:04:45.871 22:28:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:45.871 00:04:45.871 real 0m18.280s 00:04:45.871 user 0m32.674s 00:04:45.871 sys 0m5.436s 00:04:45.871 22:28:29 
event.cpu_locks -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:45.871 22:28:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:45.871 ************************************ 00:04:45.871 END TEST cpu_locks 00:04:45.871 ************************************ 00:04:46.128 22:28:29 event -- common/autotest_common.sh@1136 -- # return 0 00:04:46.128 00:04:46.129 real 0m42.381s 00:04:46.129 user 1m20.718s 00:04:46.129 sys 0m9.529s 00:04:46.129 22:28:29 event -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:46.129 22:28:29 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.129 ************************************ 00:04:46.129 END TEST event 00:04:46.129 ************************************ 00:04:46.129 22:28:29 -- common/autotest_common.sh@1136 -- # return 0 00:04:46.129 22:28:29 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:46.129 22:28:29 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:46.129 22:28:29 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:46.129 22:28:29 -- common/autotest_common.sh@10 -- # set +x 00:04:46.129 ************************************ 00:04:46.129 START TEST thread 00:04:46.129 ************************************ 00:04:46.129 22:28:29 thread -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:46.129 * Looking for test storage... 
00:04:46.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:46.129 22:28:29 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:46.129 22:28:29 thread -- common/autotest_common.sh@1093 -- # '[' 8 -le 1 ']' 00:04:46.129 22:28:29 thread -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:46.129 22:28:29 thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.129 ************************************ 00:04:46.129 START TEST thread_poller_perf 00:04:46.129 ************************************ 00:04:46.129 22:28:29 thread.thread_poller_perf -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:46.129 [2024-07-15 22:28:29.506599] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:04:46.129 [2024-07-15 22:28:29.506667] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1142225 ] 00:04:46.129 [2024-07-15 22:28:29.569746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.386 [2024-07-15 22:28:29.690271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.386 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:04:47.317 ====================================== 00:04:47.317 busy:2713819659 (cyc) 00:04:47.317 total_run_count: 296000 00:04:47.317 tsc_hz: 2700000000 (cyc) 00:04:47.318 ====================================== 00:04:47.318 poller_cost: 9168 (cyc), 3395 (nsec) 00:04:47.575 00:04:47.575 real 0m1.330s 00:04:47.575 user 0m1.244s 00:04:47.575 sys 0m0.080s 00:04:47.575 22:28:30 thread.thread_poller_perf -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:47.575 22:28:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:47.575 ************************************ 00:04:47.575 END TEST thread_poller_perf 00:04:47.575 ************************************ 00:04:47.575 22:28:30 thread -- common/autotest_common.sh@1136 -- # return 0 00:04:47.575 22:28:30 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:47.575 22:28:30 thread -- common/autotest_common.sh@1093 -- # '[' 8 -le 1 ']' 00:04:47.575 22:28:30 thread -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:47.575 22:28:30 thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.575 ************************************ 00:04:47.575 START TEST thread_poller_perf 00:04:47.575 ************************************ 00:04:47.575 22:28:30 thread.thread_poller_perf -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:47.575 [2024-07-15 22:28:30.887280] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:04:47.575 [2024-07-15 22:28:30.887349] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1142498 ] 00:04:47.575 [2024-07-15 22:28:30.952169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.575 [2024-07-15 22:28:31.068835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.575 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:04:48.947 ====================================== 00:04:48.947 busy:2702523472 (cyc) 00:04:48.947 total_run_count: 3864000 00:04:48.947 tsc_hz: 2700000000 (cyc) 00:04:48.947 ====================================== 00:04:48.947 poller_cost: 699 (cyc), 258 (nsec) 00:04:48.947 00:04:48.947 real 0m1.321s 00:04:48.947 user 0m1.227s 00:04:48.947 sys 0m0.088s 00:04:48.947 22:28:32 thread.thread_poller_perf -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:48.947 22:28:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:48.947 ************************************ 00:04:48.947 END TEST thread_poller_perf 00:04:48.947 ************************************ 00:04:48.947 22:28:32 thread -- common/autotest_common.sh@1136 -- # return 0 00:04:48.947 22:28:32 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:48.947 00:04:48.947 real 0m2.801s 00:04:48.947 user 0m2.542s 00:04:48.947 sys 0m0.258s 00:04:48.947 22:28:32 thread -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:48.947 22:28:32 thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.947 ************************************ 00:04:48.947 END TEST thread 00:04:48.947 ************************************ 00:04:48.947 22:28:32 -- common/autotest_common.sh@1136 -- # return 0 00:04:48.947 22:28:32 -- spdk/autotest.sh@183 -- # run_test accel 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:04:48.947 22:28:32 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:48.947 22:28:32 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:48.947 22:28:32 -- common/autotest_common.sh@10 -- # set +x 00:04:48.947 ************************************ 00:04:48.947 START TEST accel 00:04:48.947 ************************************ 00:04:48.947 22:28:32 accel -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:04:48.947 * Looking for test storage... 00:04:48.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:04:48.947 22:28:32 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:04:48.947 22:28:32 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:04:48.947 22:28:32 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:48.947 22:28:32 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1142696 00:04:48.947 22:28:32 accel -- accel/accel.sh@63 -- # waitforlisten 1142696 00:04:48.947 22:28:32 accel -- common/autotest_common.sh@823 -- # '[' -z 1142696 ']' 00:04:48.947 22:28:32 accel -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.947 22:28:32 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:04:48.948 22:28:32 accel -- accel/accel.sh@61 -- # build_accel_config 00:04:48.948 22:28:32 accel -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:48.948 22:28:32 accel -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:48.948 22:28:32 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:48.948 22:28:32 accel -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:48.948 22:28:32 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:48.948 22:28:32 accel -- common/autotest_common.sh@10 -- # set +x 00:04:48.948 22:28:32 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:48.948 22:28:32 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:48.948 22:28:32 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:48.948 22:28:32 accel -- accel/accel.sh@40 -- # local IFS=, 00:04:48.948 22:28:32 accel -- accel/accel.sh@41 -- # jq -r . 00:04:48.948 [2024-07-15 22:28:32.366024] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:04:48.948 [2024-07-15 22:28:32.366109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1142696 ] 00:04:48.948 [2024-07-15 22:28:32.424757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.205 [2024-07-15 22:28:32.541034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.464 22:28:32 accel -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:49.464 22:28:32 accel -- common/autotest_common.sh@856 -- # return 0 00:04:49.464 22:28:32 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:04:49.464 22:28:32 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:04:49.464 22:28:32 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:04:49.464 22:28:32 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:04:49.464 22:28:32 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:04:49.464 22:28:32 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:04:49.464 22:28:32 accel -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:49.464 22:28:32 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:04:49.464 22:28:32 accel -- common/autotest_common.sh@10 -- # set +x 00:04:49.464 22:28:32 accel -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:49.464 22:28:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.464 22:28:32 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.464 22:28:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.464 22:28:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.464 22:28:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.464 22:28:32 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.464 22:28:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.464 22:28:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.464 22:28:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.464 22:28:32 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.464 22:28:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.464 22:28:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.464 22:28:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.464 22:28:32 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.464 22:28:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.464 22:28:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.464 22:28:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.464 22:28:32 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.464 22:28:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.464 22:28:32 accel -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:04:49.464 22:28:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.464 22:28:32 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.464 22:28:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.464 22:28:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.464 22:28:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.464 22:28:32 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.464 22:28:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.464 22:28:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.464 22:28:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.464 22:28:32 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.464 22:28:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.464 22:28:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.464 22:28:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.464 22:28:32 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.464 22:28:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.464 22:28:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.464 22:28:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.464 22:28:32 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.464 22:28:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.464 22:28:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.464 22:28:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.464 22:28:32 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.464 22:28:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.464 22:28:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.464 22:28:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.464 22:28:32 accel -- accel/accel.sh@72 -- # 
IFS== 00:04:49.464 22:28:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.464 22:28:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.464 22:28:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.464 22:28:32 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.464 22:28:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.464 22:28:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.464 22:28:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.464 22:28:32 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.464 22:28:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.464 22:28:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.464 22:28:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.464 22:28:32 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.464 22:28:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.464 22:28:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.464 22:28:32 accel -- accel/accel.sh@75 -- # killprocess 1142696 00:04:49.464 22:28:32 accel -- common/autotest_common.sh@942 -- # '[' -z 1142696 ']' 00:04:49.464 22:28:32 accel -- common/autotest_common.sh@946 -- # kill -0 1142696 00:04:49.464 22:28:32 accel -- common/autotest_common.sh@947 -- # uname 00:04:49.464 22:28:32 accel -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:04:49.464 22:28:32 accel -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1142696 00:04:49.464 22:28:32 accel -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:04:49.464 22:28:32 accel -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:04:49.464 22:28:32 accel -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1142696' 00:04:49.464 killing process with pid 1142696 00:04:49.464 22:28:32 accel -- common/autotest_common.sh@961 -- # kill 1142696 00:04:49.464 
22:28:32 accel -- common/autotest_common.sh@966 -- # wait 1142696 00:04:50.075 22:28:33 accel -- accel/accel.sh@76 -- # trap - ERR 00:04:50.075 22:28:33 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:04:50.075 22:28:33 accel -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:04:50.075 22:28:33 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:50.075 22:28:33 accel -- common/autotest_common.sh@10 -- # set +x 00:04:50.075 22:28:33 accel.accel_help -- common/autotest_common.sh@1117 -- # accel_perf -h 00:04:50.075 22:28:33 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:04:50.075 22:28:33 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:04:50.075 22:28:33 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:50.075 22:28:33 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:50.075 22:28:33 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:50.075 22:28:33 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:50.075 22:28:33 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:50.075 22:28:33 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:04:50.075 22:28:33 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:04:50.075 22:28:33 accel.accel_help -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:50.075 22:28:33 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:04:50.075 22:28:33 accel -- common/autotest_common.sh@1136 -- # return 0 00:04:50.075 22:28:33 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:04:50.075 22:28:33 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:04:50.075 22:28:33 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:50.075 22:28:33 accel -- common/autotest_common.sh@10 -- # set +x 00:04:50.075 ************************************ 00:04:50.075 START TEST accel_missing_filename 00:04:50.075 ************************************ 00:04:50.075 22:28:33 accel.accel_missing_filename -- common/autotest_common.sh@1117 -- # NOT accel_perf -t 1 -w compress 00:04:50.075 22:28:33 accel.accel_missing_filename -- common/autotest_common.sh@642 -- # local es=0 00:04:50.075 22:28:33 accel.accel_missing_filename -- common/autotest_common.sh@644 -- # valid_exec_arg accel_perf -t 1 -w compress 00:04:50.075 22:28:33 accel.accel_missing_filename -- common/autotest_common.sh@630 -- # local arg=accel_perf 00:04:50.075 22:28:33 accel.accel_missing_filename -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:50.075 22:28:33 accel.accel_missing_filename -- common/autotest_common.sh@634 -- # type -t accel_perf 00:04:50.075 22:28:33 accel.accel_missing_filename -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:50.075 22:28:33 accel.accel_missing_filename -- common/autotest_common.sh@645 -- # accel_perf -t 1 -w compress 00:04:50.075 22:28:33 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:04:50.075 22:28:33 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:04:50.075 22:28:33 
accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:50.075 22:28:33 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:50.075 22:28:33 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:50.075 22:28:33 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:50.075 22:28:33 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:50.075 22:28:33 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:04:50.075 22:28:33 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:04:50.075 [2024-07-15 22:28:33.443401] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:04:50.075 [2024-07-15 22:28:33.443455] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1142863 ] 00:04:50.075 [2024-07-15 22:28:33.505374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.332 [2024-07-15 22:28:33.622556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.332 [2024-07-15 22:28:33.684412] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:50.332 [2024-07-15 22:28:33.773049] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:04:50.589 A filename is required. 
00:04:50.589 22:28:33 accel.accel_missing_filename -- common/autotest_common.sh@645 -- # es=234 00:04:50.589 22:28:33 accel.accel_missing_filename -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:04:50.589 22:28:33 accel.accel_missing_filename -- common/autotest_common.sh@654 -- # es=106 00:04:50.589 22:28:33 accel.accel_missing_filename -- common/autotest_common.sh@655 -- # case "$es" in 00:04:50.589 22:28:33 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # es=1 00:04:50.589 22:28:33 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:04:50.589 00:04:50.589 real 0m0.471s 00:04:50.589 user 0m0.356s 00:04:50.589 sys 0m0.147s 00:04:50.589 22:28:33 accel.accel_missing_filename -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:50.589 22:28:33 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:04:50.589 ************************************ 00:04:50.589 END TEST accel_missing_filename 00:04:50.590 ************************************ 00:04:50.590 22:28:33 accel -- common/autotest_common.sh@1136 -- # return 0 00:04:50.590 22:28:33 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:50.590 22:28:33 accel -- common/autotest_common.sh@1093 -- # '[' 10 -le 1 ']' 00:04:50.590 22:28:33 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:50.590 22:28:33 accel -- common/autotest_common.sh@10 -- # set +x 00:04:50.590 ************************************ 00:04:50.590 START TEST accel_compress_verify 00:04:50.590 ************************************ 00:04:50.590 22:28:33 accel.accel_compress_verify -- common/autotest_common.sh@1117 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:50.590 22:28:33 accel.accel_compress_verify -- common/autotest_common.sh@642 -- # local es=0 00:04:50.590 22:28:33 
accel.accel_compress_verify -- common/autotest_common.sh@644 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:50.590 22:28:33 accel.accel_compress_verify -- common/autotest_common.sh@630 -- # local arg=accel_perf 00:04:50.590 22:28:33 accel.accel_compress_verify -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:50.590 22:28:33 accel.accel_compress_verify -- common/autotest_common.sh@634 -- # type -t accel_perf 00:04:50.590 22:28:33 accel.accel_compress_verify -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:50.590 22:28:33 accel.accel_compress_verify -- common/autotest_common.sh@645 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:50.590 22:28:33 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:50.590 22:28:33 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:04:50.590 22:28:33 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:50.590 22:28:33 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:50.590 22:28:33 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:50.590 22:28:33 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:50.590 22:28:33 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:50.590 22:28:33 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:04:50.590 22:28:33 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:04:50.590 [2024-07-15 22:28:33.963403] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:04:50.590 [2024-07-15 22:28:33.963464] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1142898 ] 00:04:50.590 [2024-07-15 22:28:34.025616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.848 [2024-07-15 22:28:34.144125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.848 [2024-07-15 22:28:34.202394] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:50.848 [2024-07-15 22:28:34.283690] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:04:51.107 00:04:51.107 Compression does not support the verify option, aborting. 00:04:51.107 22:28:34 accel.accel_compress_verify -- common/autotest_common.sh@645 -- # es=161 00:04:51.107 22:28:34 accel.accel_compress_verify -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:04:51.107 22:28:34 accel.accel_compress_verify -- common/autotest_common.sh@654 -- # es=33 00:04:51.107 22:28:34 accel.accel_compress_verify -- common/autotest_common.sh@655 -- # case "$es" in 00:04:51.107 22:28:34 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # es=1 00:04:51.107 22:28:34 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:04:51.107 00:04:51.107 real 0m0.465s 00:04:51.107 user 0m0.348s 00:04:51.107 sys 0m0.151s 00:04:51.107 22:28:34 accel.accel_compress_verify -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:51.107 22:28:34 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:04:51.107 ************************************ 00:04:51.107 END TEST accel_compress_verify 00:04:51.107 ************************************ 00:04:51.107 22:28:34 accel -- common/autotest_common.sh@1136 -- # return 0 00:04:51.107 22:28:34 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT 
accel_perf -t 1 -w foobar 00:04:51.107 22:28:34 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:04:51.107 22:28:34 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:51.107 22:28:34 accel -- common/autotest_common.sh@10 -- # set +x 00:04:51.107 ************************************ 00:04:51.107 START TEST accel_wrong_workload 00:04:51.107 ************************************ 00:04:51.107 22:28:34 accel.accel_wrong_workload -- common/autotest_common.sh@1117 -- # NOT accel_perf -t 1 -w foobar 00:04:51.107 22:28:34 accel.accel_wrong_workload -- common/autotest_common.sh@642 -- # local es=0 00:04:51.107 22:28:34 accel.accel_wrong_workload -- common/autotest_common.sh@644 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:04:51.107 22:28:34 accel.accel_wrong_workload -- common/autotest_common.sh@630 -- # local arg=accel_perf 00:04:51.107 22:28:34 accel.accel_wrong_workload -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:51.107 22:28:34 accel.accel_wrong_workload -- common/autotest_common.sh@634 -- # type -t accel_perf 00:04:51.108 22:28:34 accel.accel_wrong_workload -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:51.108 22:28:34 accel.accel_wrong_workload -- common/autotest_common.sh@645 -- # accel_perf -t 1 -w foobar 00:04:51.108 22:28:34 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:04:51.108 22:28:34 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:04:51.108 22:28:34 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:51.108 22:28:34 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:51.108 22:28:34 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:51.108 22:28:34 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:51.108 22:28:34 accel.accel_wrong_workload -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:04:51.108 22:28:34 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:04:51.108 22:28:34 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:04:51.108 Unsupported workload type: foobar 00:04:51.108 [2024-07-15 22:28:34.468335] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:04:51.108 accel_perf options: 00:04:51.108 [-h help message] 00:04:51.108 [-q queue depth per core] 00:04:51.108 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:51.108 [-T number of threads per core 00:04:51.108 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:51.108 [-t time in seconds] 00:04:51.108 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:51.108 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:04:51.108 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:51.108 [-l for compress/decompress workloads, name of uncompressed input file 00:04:51.108 [-S for crc32c workload, use this seed value (default 0) 00:04:51.108 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:51.108 [-f for fill workload, use this BYTE value (default 255) 00:04:51.108 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:51.108 [-y verify result if this switch is on] 00:04:51.108 [-a tasks to allocate per core (default: same value as -q)] 00:04:51.108 Can be used to spread operations across a wider range of memory. 
00:04:51.108 22:28:34 accel.accel_wrong_workload -- common/autotest_common.sh@645 -- # es=1 00:04:51.108 22:28:34 accel.accel_wrong_workload -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:04:51.108 22:28:34 accel.accel_wrong_workload -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:04:51.108 22:28:34 accel.accel_wrong_workload -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:04:51.108 00:04:51.108 real 0m0.020s 00:04:51.108 user 0m0.012s 00:04:51.108 sys 0m0.008s 00:04:51.108 22:28:34 accel.accel_wrong_workload -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:51.108 22:28:34 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:04:51.108 ************************************ 00:04:51.108 END TEST accel_wrong_workload 00:04:51.108 ************************************ 00:04:51.108 Error: writing output failed: Broken pipe 00:04:51.108 22:28:34 accel -- common/autotest_common.sh@1136 -- # return 0 00:04:51.108 22:28:34 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:04:51.108 22:28:34 accel -- common/autotest_common.sh@1093 -- # '[' 10 -le 1 ']' 00:04:51.108 22:28:34 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:51.108 22:28:34 accel -- common/autotest_common.sh@10 -- # set +x 00:04:51.108 ************************************ 00:04:51.108 START TEST accel_negative_buffers 00:04:51.108 ************************************ 00:04:51.108 22:28:34 accel.accel_negative_buffers -- common/autotest_common.sh@1117 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:04:51.108 22:28:34 accel.accel_negative_buffers -- common/autotest_common.sh@642 -- # local es=0 00:04:51.108 22:28:34 accel.accel_negative_buffers -- common/autotest_common.sh@644 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:04:51.108 22:28:34 accel.accel_negative_buffers -- common/autotest_common.sh@630 -- # local arg=accel_perf 00:04:51.108 22:28:34 accel.accel_negative_buffers -- 
common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:51.108 22:28:34 accel.accel_negative_buffers -- common/autotest_common.sh@634 -- # type -t accel_perf 00:04:51.108 22:28:34 accel.accel_negative_buffers -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:51.108 22:28:34 accel.accel_negative_buffers -- common/autotest_common.sh@645 -- # accel_perf -t 1 -w xor -y -x -1 00:04:51.108 22:28:34 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:04:51.108 22:28:34 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:04:51.108 22:28:34 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:51.108 22:28:34 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:51.108 22:28:34 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:51.108 22:28:34 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:51.108 22:28:34 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:51.108 22:28:34 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:04:51.108 22:28:34 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:04:51.108 -x option must be non-negative. 00:04:51.108 [2024-07-15 22:28:34.536914] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:04:51.108 accel_perf options: 00:04:51.108 [-h help message] 00:04:51.108 [-q queue depth per core] 00:04:51.108 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:51.108 [-T number of threads per core 00:04:51.108 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:04:51.108 [-t time in seconds] 00:04:51.108 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:51.108 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:04:51.108 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:51.108 [-l for compress/decompress workloads, name of uncompressed input file 00:04:51.108 [-S for crc32c workload, use this seed value (default 0) 00:04:51.108 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:51.108 [-f for fill workload, use this BYTE value (default 255) 00:04:51.108 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:51.108 [-y verify result if this switch is on] 00:04:51.108 [-a tasks to allocate per core (default: same value as -q)] 00:04:51.108 Can be used to spread operations across a wider range of memory. 
00:04:51.108 22:28:34 accel.accel_negative_buffers -- common/autotest_common.sh@645 -- # es=1 00:04:51.108 22:28:34 accel.accel_negative_buffers -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:04:51.108 22:28:34 accel.accel_negative_buffers -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:04:51.108 22:28:34 accel.accel_negative_buffers -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:04:51.108 00:04:51.108 real 0m0.025s 00:04:51.108 user 0m0.014s 00:04:51.108 sys 0m0.011s 00:04:51.108 22:28:34 accel.accel_negative_buffers -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:51.108 22:28:34 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:04:51.108 ************************************ 00:04:51.108 END TEST accel_negative_buffers 00:04:51.108 ************************************ 00:04:51.108 Error: writing output failed: Broken pipe 00:04:51.108 22:28:34 accel -- common/autotest_common.sh@1136 -- # return 0 00:04:51.108 22:28:34 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:04:51.108 22:28:34 accel -- common/autotest_common.sh@1093 -- # '[' 9 -le 1 ']' 00:04:51.108 22:28:34 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:51.108 22:28:34 accel -- common/autotest_common.sh@10 -- # set +x 00:04:51.108 ************************************ 00:04:51.108 START TEST accel_crc32c 00:04:51.108 ************************************ 00:04:51.108 22:28:34 accel.accel_crc32c -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w crc32c -S 32 -y 00:04:51.108 22:28:34 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:04:51.108 22:28:34 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:04:51.108 22:28:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.108 22:28:34 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:04:51.108 22:28:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 
00:04:51.108 22:28:34 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:04:51.108 22:28:34 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:04:51.108 22:28:34 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:51.108 22:28:34 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:51.108 22:28:34 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:51.108 22:28:34 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:51.108 22:28:34 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:51.108 22:28:34 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:04:51.108 22:28:34 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:04:51.108 [2024-07-15 22:28:34.602837] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:04:51.108 [2024-07-15 22:28:34.602913] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1143078 ] 00:04:51.367 [2024-07-15 22:28:34.668423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.367 [2024-07-15 22:28:34.789522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.367 22:28:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:51.367 22:28:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.367 22:28:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.367 22:28:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.367 22:28:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:51.367 22:28:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.367 22:28:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 
00:04:51.367 22:28:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.367 22:28:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:04:51.367 22:28:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.368 22:28:34 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.368 22:28:34 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.368 22:28:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:52.741 22:28:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:52.741 22:28:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:52.741 22:28:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:52.741 22:28:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:52.741 22:28:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:52.741 22:28:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:52.741 22:28:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:52.741 22:28:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:52.741 22:28:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:52.741 22:28:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:52.741 22:28:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:52.741 22:28:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:52.741 22:28:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:52.741 22:28:36 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:52.741 22:28:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:52.741 22:28:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:52.741 22:28:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:52.741 22:28:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:52.741 22:28:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:52.741 22:28:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:52.741 22:28:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:52.741 22:28:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:52.741 22:28:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:52.741 22:28:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:52.741 22:28:36 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:52.741 22:28:36 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:04:52.741 22:28:36 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:52.741 00:04:52.741 real 0m1.466s 00:04:52.741 user 0m1.317s 00:04:52.741 sys 0m0.151s 00:04:52.741 22:28:36 accel.accel_crc32c -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:52.741 22:28:36 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:04:52.741 ************************************ 00:04:52.741 END TEST accel_crc32c 00:04:52.741 ************************************ 00:04:52.741 22:28:36 accel -- common/autotest_common.sh@1136 -- # return 0 00:04:52.741 22:28:36 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:04:52.741 22:28:36 accel -- common/autotest_common.sh@1093 -- # '[' 9 -le 1 ']' 00:04:52.741 22:28:36 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:52.741 22:28:36 accel -- common/autotest_common.sh@10 -- # set +x 00:04:52.741 ************************************ 
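The accel_crc32c test that just finished (END TEST accel_crc32c above) drove accel_perf's software CRC-32C path (`accel_module=software`, `accel_opc=crc32c`, 4096-byte buffers). As a reference point for what that workload computes, here is a minimal pure-Python sketch of CRC-32C (Castagnoli); the table-driven loop and the reflected polynomial 0x82F63B78 are standard CRC-32C conventions, not taken from the SPDK source, so treat this as an illustration rather than SPDK's implementation:

```python
# Minimal table-driven CRC-32C (Castagnoli) sketch.
# 0x82F63B78 is the reflected form of the CRC-32C polynomial 0x1EDC6F41.
_POLY = 0x82F63B78

def _make_table():
    table = []
    for i in range(256):
        crc = i
        for _ in range(8):
            # Shift right; XOR in the polynomial when the low bit is set.
            crc = (crc >> 1) ^ _POLY if crc & 1 else crc >> 1
        table.append(crc)
    return table

_TABLE = _make_table()

def crc32c(data: bytes, crc: int = 0) -> int:
    """Return the CRC-32C of `data`, chaining from a previous `crc`."""
    crc ^= 0xFFFFFFFF
    for b in data:
        crc = (crc >> 8) ^ _TABLE[(crc ^ b) & 0xFF]
    return crc ^ 0xFFFFFFFF
```

The standard check value for CRC-32C is `crc32c(b"123456789") == 0xE3069283`; accel_perf benchmarks a hardware- or SIMD-accelerated version of this same computation, which is why the software path above is orders of magnitude slower.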
00:04:52.741 START TEST accel_crc32c_C2 00:04:52.741 ************************************ 00:04:52.742 22:28:36 accel.accel_crc32c_C2 -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w crc32c -y -C 2 00:04:52.742 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:04:52.742 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:04:52.742 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.742 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:04:52.742 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.742 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:04:52.742 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:04:52.742 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:52.742 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:52.742 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:52.742 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:52.742 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:52.742 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:04:52.742 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:04:52.742 [2024-07-15 22:28:36.113094] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:04:52.742 [2024-07-15 22:28:36.113156] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1143242 ] 00:04:52.742 [2024-07-15 22:28:36.174479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.000 [2024-07-15 22:28:36.296842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.000 22:28:36 
accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:53.000 22:28:36 accel.accel_crc32c_C2 
-- accel/accel.sh@20 -- # val=32 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:53.000 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:53.001 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.001 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:53.001 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:53.001 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:53.001 
22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.001 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:53.001 22:28:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.373 22:28:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:54.373 22:28:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.373 22:28:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.373 22:28:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.373 22:28:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:54.373 22:28:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.373 22:28:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.373 22:28:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.373 22:28:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:54.373 22:28:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.373 22:28:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.373 22:28:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.373 22:28:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:54.373 22:28:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.373 22:28:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.373 22:28:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.373 22:28:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:54.373 22:28:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.373 22:28:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.374 22:28:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.374 22:28:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:54.374 22:28:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case 
"$var" in 00:04:54.374 22:28:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.374 22:28:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.374 22:28:37 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:54.374 22:28:37 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:04:54.374 22:28:37 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:54.374 00:04:54.374 real 0m1.479s 00:04:54.374 user 0m1.334s 00:04:54.374 sys 0m0.147s 00:04:54.374 22:28:37 accel.accel_crc32c_C2 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:54.374 22:28:37 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:04:54.374 ************************************ 00:04:54.374 END TEST accel_crc32c_C2 00:04:54.374 ************************************ 00:04:54.374 22:28:37 accel -- common/autotest_common.sh@1136 -- # return 0 00:04:54.374 22:28:37 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:04:54.374 22:28:37 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:04:54.374 22:28:37 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:54.374 22:28:37 accel -- common/autotest_common.sh@10 -- # set +x 00:04:54.374 ************************************ 00:04:54.374 START TEST accel_copy 00:04:54.374 ************************************ 00:04:54.374 22:28:37 accel.accel_copy -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w copy -y 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:04:54.374 [2024-07-15 22:28:37.638033] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:04:54.374 [2024-07-15 22:28:37.638091] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1143473 ] 00:04:54.374 [2024-07-15 22:28:37.695339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.374 [2024-07-15 22:28:37.798621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.374 22:28:37 
accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:04:54.374 22:28:37 
accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.374 22:28:37 
accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.374 22:28:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:55.747 22:28:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:55.747 22:28:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:55.747 22:28:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:55.747 22:28:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:55.747 22:28:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:55.747 22:28:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:55.747 22:28:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:55.747 22:28:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:55.747 22:28:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:55.747 22:28:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:55.747 22:28:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:55.748 22:28:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:55.748 22:28:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:55.748 22:28:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:55.748 22:28:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:55.748 22:28:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:55.748 22:28:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:55.748 22:28:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:55.748 22:28:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:55.748 22:28:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:55.748 22:28:39 accel.accel_copy -- 
accel/accel.sh@20 -- # val= 00:04:55.748 22:28:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:55.748 22:28:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:55.748 22:28:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:55.748 22:28:39 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:55.748 22:28:39 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:04:55.748 22:28:39 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:55.748 00:04:55.748 real 0m1.445s 00:04:55.748 user 0m1.309s 00:04:55.748 sys 0m0.136s 00:04:55.748 22:28:39 accel.accel_copy -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:55.748 22:28:39 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:04:55.748 ************************************ 00:04:55.748 END TEST accel_copy 00:04:55.748 ************************************ 00:04:55.748 22:28:39 accel -- common/autotest_common.sh@1136 -- # return 0 00:04:55.748 22:28:39 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:55.748 22:28:39 accel -- common/autotest_common.sh@1093 -- # '[' 13 -le 1 ']' 00:04:55.748 22:28:39 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:55.748 22:28:39 accel -- common/autotest_common.sh@10 -- # set +x 00:04:55.748 ************************************ 00:04:55.748 START TEST accel_fill 00:04:55.748 ************************************ 00:04:55.748 22:28:39 accel.accel_fill -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:55.748 22:28:39 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:04:55.748 22:28:39 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:04:55.748 22:28:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.748 22:28:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.748 22:28:39 accel.accel_fill -- accel/accel.sh@15 -- # 
accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:55.748 22:28:39 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:55.748 22:28:39 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:04:55.748 22:28:39 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:55.748 22:28:39 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:55.748 22:28:39 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:55.748 22:28:39 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:55.748 22:28:39 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:55.748 22:28:39 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:04:55.748 22:28:39 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:04:55.748 [2024-07-15 22:28:39.129159] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:04:55.748 [2024-07-15 22:28:39.129221] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1143672 ] 00:04:55.748 [2024-07-15 22:28:39.191395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.006 [2024-07-15 22:28:39.310069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:56.006 22:28:39 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:56.006 22:28:39 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:04:56.006 22:28:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:56.007 22:28:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:56.007 22:28:39 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:04:56.007 22:28:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:56.007 22:28:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:56.007 22:28:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:56.007 22:28:39 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:04:56.007 22:28:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:56.007 22:28:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:56.007 22:28:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:56.007 22:28:39 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:04:56.007 22:28:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:56.007 22:28:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:56.007 22:28:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:56.007 22:28:39 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:04:56.007 22:28:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:56.007 22:28:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:56.007 22:28:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:56.007 22:28:39 
accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:04:56.007 22:28:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:56.007 22:28:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:56.007 22:28:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:56.007 22:28:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:56.007 22:28:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:56.007 22:28:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:56.007 22:28:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:56.007 22:28:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:56.007 22:28:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:56.007 22:28:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:56.007 22:28:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:57.380 22:28:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:57.380 22:28:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:57.380 22:28:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:57.380 22:28:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:57.380 22:28:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:57.380 22:28:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:57.381 22:28:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:57.381 22:28:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:57.381 22:28:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:57.381 22:28:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:57.381 22:28:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:57.381 22:28:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:57.381 22:28:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:57.381 22:28:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:57.381 22:28:40 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:04:57.381 22:28:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:57.381 22:28:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:57.381 22:28:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:57.381 22:28:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:57.381 22:28:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:57.381 22:28:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:57.381 22:28:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:57.381 22:28:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:57.381 22:28:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:57.381 22:28:40 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:57.381 22:28:40 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:04:57.381 22:28:40 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:57.381 00:04:57.381 real 0m1.457s 00:04:57.381 user 0m1.323s 00:04:57.381 sys 0m0.136s 00:04:57.381 22:28:40 accel.accel_fill -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:57.381 22:28:40 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:04:57.381 ************************************ 00:04:57.381 END TEST accel_fill 00:04:57.381 ************************************ 00:04:57.381 22:28:40 accel -- common/autotest_common.sh@1136 -- # return 0 00:04:57.381 22:28:40 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:04:57.381 22:28:40 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:04:57.381 22:28:40 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:57.381 22:28:40 accel -- common/autotest_common.sh@10 -- # set +x 00:04:57.381 ************************************ 00:04:57.381 START TEST accel_copy_crc32c 00:04:57.381 ************************************ 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- 
common/autotest_common.sh@1117 -- # accel_test -t 1 -w copy_crc32c -y 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:04:57.381 [2024-07-15 22:28:40.631989] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:04:57.381 [2024-07-15 22:28:40.632058] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1143831 ] 00:04:57.381 [2024-07-15 22:28:40.696134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.381 [2024-07-15 22:28:40.814291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val=software 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.381 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:04:57.639 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.639 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.639 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.639 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:04:57.639 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.639 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.639 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.639 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:04:57.639 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.639 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.639 22:28:40 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.639 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:57.639 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.639 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.639 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.639 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:57.639 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.639 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.639 22:28:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:59.014 22:28:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:59.014 22:28:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:59.014 22:28:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:59.014 22:28:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:59.014 22:28:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:59.014 22:28:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:59.014 22:28:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:59.014 22:28:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:59.014 22:28:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:59.014 22:28:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:59.014 22:28:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:59.014 22:28:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:59.014 22:28:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:59.014 22:28:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:59.014 22:28:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:59.014 22:28:42 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:59.014 22:28:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:59.014 22:28:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:59.014 22:28:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:59.014 22:28:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:59.014 22:28:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:59.014 22:28:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:59.014 22:28:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:59.014 22:28:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:59.014 22:28:42 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:59.014 22:28:42 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:04:59.014 22:28:42 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:59.014 00:04:59.014 real 0m1.478s 00:04:59.014 user 0m1.329s 00:04:59.014 sys 0m0.151s 00:04:59.014 22:28:42 accel.accel_copy_crc32c -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:59.014 22:28:42 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:04:59.014 ************************************ 00:04:59.014 END TEST accel_copy_crc32c 00:04:59.014 ************************************ 00:04:59.014 22:28:42 accel -- common/autotest_common.sh@1136 -- # return 0 00:04:59.014 22:28:42 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:04:59.014 22:28:42 accel -- common/autotest_common.sh@1093 -- # '[' 9 -le 1 ']' 00:04:59.014 22:28:42 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:59.014 22:28:42 accel -- common/autotest_common.sh@10 -- # set +x 00:04:59.014 ************************************ 00:04:59.014 START TEST accel_copy_crc32c_C2 00:04:59.014 
************************************ 00:04:59.014 22:28:42 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:04:59.014 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:04:59.014 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:04:59.014 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.014 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:04:59.014 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.014 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:04:59.014 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:04:59.014 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:59.014 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:59.014 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:59.014 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:59.014 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:59.014 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:04:59.014 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:04:59.014 [2024-07-15 22:28:42.155794] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:04:59.014 [2024-07-15 22:28:42.155857] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1144022 ] 00:04:59.014 [2024-07-15 22:28:42.212817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.015 [2024-07-15 22:28:42.317452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:59.015 22:28:42 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # 
val=Yes 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.015 22:28:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.387 22:28:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:00.387 22:28:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.387 22:28:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.387 22:28:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.387 22:28:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:00.387 22:28:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.387 22:28:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.387 22:28:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.387 22:28:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:00.387 22:28:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.387 22:28:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.387 22:28:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.387 
22:28:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:00.387 22:28:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.387 22:28:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.387 22:28:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.387 22:28:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:00.387 22:28:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.387 22:28:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.387 22:28:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.387 22:28:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:00.387 22:28:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.387 22:28:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.387 22:28:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.387 22:28:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:00.387 22:28:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:00.387 22:28:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:00.387 00:05:00.387 real 0m1.435s 00:05:00.387 user 0m1.300s 00:05:00.387 sys 0m0.137s 00:05:00.387 22:28:43 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:00.387 22:28:43 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:00.387 ************************************ 00:05:00.387 END TEST accel_copy_crc32c_C2 00:05:00.387 ************************************ 00:05:00.387 22:28:43 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:00.387 22:28:43 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:00.387 22:28:43 accel -- common/autotest_common.sh@1093 -- # 
'[' 7 -le 1 ']' 00:05:00.387 22:28:43 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:00.387 22:28:43 accel -- common/autotest_common.sh@10 -- # set +x 00:05:00.387 ************************************ 00:05:00.387 START TEST accel_dualcast 00:05:00.387 ************************************ 00:05:00.387 22:28:43 accel.accel_dualcast -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w dualcast -y 00:05:00.387 22:28:43 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:00.387 22:28:43 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:00.387 22:28:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.387 22:28:43 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:00.387 22:28:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.387 22:28:43 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:00.387 22:28:43 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:00.387 22:28:43 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:00.388 [2024-07-15 22:28:43.639083] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:05:00.388 [2024-07-15 22:28:43.639141] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1144256 ] 00:05:00.388 [2024-07-15 22:28:43.701288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.388 [2024-07-15 22:28:43.819271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.388 22:28:43 accel.accel_dualcast -- 
accel/accel.sh@19 -- # IFS=: 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 
00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.388 22:28:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.760 22:28:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:01.760 22:28:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case 
"$var" in 00:05:01.761 22:28:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:01.761 22:28:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.761 22:28:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:01.761 22:28:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:01.761 22:28:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:01.761 22:28:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.761 22:28:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:01.761 22:28:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:01.761 22:28:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:01.761 22:28:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.761 22:28:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:01.761 22:28:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:01.761 22:28:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:01.761 22:28:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.761 22:28:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:01.761 22:28:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:01.761 22:28:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:01.761 22:28:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.761 22:28:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:01.761 22:28:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:01.761 22:28:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:01.761 22:28:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.761 22:28:45 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:01.761 22:28:45 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:01.761 22:28:45 accel.accel_dualcast -- accel/accel.sh@27 
-- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:01.761
00:05:01.761 real 0m1.466s
00:05:01.761 user 0m1.326s
00:05:01.761 sys 0m0.142s
00:05:01.761 22:28:45 accel.accel_dualcast -- common/autotest_common.sh@1118 -- # xtrace_disable
00:05:01.761 22:28:45 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x
00:05:01.761 ************************************
00:05:01.761 END TEST accel_dualcast
00:05:01.761 ************************************
00:05:01.761 22:28:45 accel -- common/autotest_common.sh@1136 -- # return 0
00:05:01.761 22:28:45 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:05:01.761 22:28:45 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']'
00:05:01.761 22:28:45 accel -- common/autotest_common.sh@1099 -- # xtrace_disable
00:05:01.761 22:28:45 accel -- common/autotest_common.sh@10 -- # set +x
00:05:01.761 ************************************
00:05:01.761 START TEST accel_compare
00:05:01.761 ************************************
00:05:01.761 22:28:45 accel.accel_compare -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w compare -y
00:05:01.761 22:28:45 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc
00:05:01.761 22:28:45 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module
00:05:01.761 22:28:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:01.761 22:28:45 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:05:01.761 22:28:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:01.761 22:28:45 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:05:01.761 22:28:45 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config
00:05:01.761 22:28:45 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:01.761 22:28:45 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:01.761
22:28:45 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:01.761 22:28:45 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:01.761 22:28:45 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:01.761 22:28:45 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:01.761 22:28:45 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:01.761 [2024-07-15 22:28:45.150296] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:05:01.761 [2024-07-15 22:28:45.150358] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1144423 ] 00:05:01.761 [2024-07-15 22:28:45.211984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.018 [2024-07-15 22:28:45.330717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.018 22:28:45 
accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.018 
22:28:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.018 22:28:45 
accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.018 22:28:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.390 22:28:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:03.390 22:28:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.390 22:28:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.390 22:28:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.390 22:28:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:03.390 22:28:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.390 22:28:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.390 22:28:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.390 22:28:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:03.390 22:28:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.390 22:28:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.390 22:28:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.390 22:28:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:03.390 22:28:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.390 22:28:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.390 22:28:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.390 22:28:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:03.390 22:28:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.390 22:28:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.390 22:28:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.390 22:28:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:03.390 22:28:46 accel.accel_compare -- accel/accel.sh@21 
-- # case "$var" in
00:05:03.390 22:28:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:03.390 22:28:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:03.390 22:28:46 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:03.390 22:28:46 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:05:03.390 22:28:46 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:03.390
00:05:03.390 real 0m1.483s
00:05:03.390 user 0m1.340s
00:05:03.390 sys 0m0.146s
00:05:03.390 22:28:46 accel.accel_compare -- common/autotest_common.sh@1118 -- # xtrace_disable
00:05:03.390 22:28:46 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x
00:05:03.390 ************************************
00:05:03.390 END TEST accel_compare
00:05:03.390 ************************************
00:05:03.390 22:28:46 accel -- common/autotest_common.sh@1136 -- # return 0
00:05:03.390 22:28:46 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:05:03.390 22:28:46 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']'
00:05:03.390 22:28:46 accel -- common/autotest_common.sh@1099 -- # xtrace_disable
00:05:03.390 22:28:46 accel -- common/autotest_common.sh@10 -- # set +x
00:05:03.390 ************************************
00:05:03.390 START TEST accel_xor
00:05:03.390 ************************************
00:05:03.390 22:28:46 accel.accel_xor -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w xor -y
00:05:03.390 22:28:46 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc
00:05:03.390 22:28:46 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module
00:05:03.390 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:03.390 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:03.390 22:28:46 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:05:03.390 22:28:46 accel.accel_xor -- accel/accel.sh@12 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:03.390 22:28:46 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:03.390 22:28:46 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:03.390 22:28:46 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:03.390 22:28:46 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:03.391 22:28:46 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:03.391 22:28:46 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:03.391 22:28:46 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:03.391 22:28:46 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:03.391 [2024-07-15 22:28:46.681660] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:05:03.391 [2024-07-15 22:28:46.681725] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1144582 ] 00:05:03.391 [2024-07-15 22:28:46.743625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.391 [2024-07-15 22:28:46.865373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.649 22:28:46 accel.accel_xor -- 
accel/accel.sh@20 -- # val=0x1 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 
00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.649 22:28:46 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.649 22:28:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 
00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:05.016
00:05:05.016 real 0m1.486s
00:05:05.016 user 0m1.338s
00:05:05.016 sys 0m0.150s
00:05:05.016 22:28:48 accel.accel_xor -- common/autotest_common.sh@1118 -- # xtrace_disable
00:05:05.016 22:28:48 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:05:05.016 ************************************
00:05:05.016 END TEST accel_xor
00:05:05.016 ************************************
00:05:05.016 22:28:48 accel -- common/autotest_common.sh@1136 -- # return 0
00:05:05.016 22:28:48 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:05:05.016 22:28:48 accel -- common/autotest_common.sh@1093 -- # '[' 9 -le 1 ']'
00:05:05.016 22:28:48 accel -- common/autotest_common.sh@1099 -- # xtrace_disable
00:05:05.016 22:28:48 accel -- common/autotest_common.sh@10 -- # set +x
00:05:05.016 ************************************
00:05:05.016 START TEST accel_xor
00:05:05.016 ************************************
00:05:05.016 22:28:48 accel.accel_xor -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w xor -y -x 3
00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc
00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module
00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:05.016 [2024-07-15 22:28:48.216073] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:05:05.016 [2024-07-15 22:28:48.216136] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1144848 ] 00:05:05.016 [2024-07-15 22:28:48.277790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.016 [2024-07-15 22:28:48.403308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var 
val 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.016 
22:28:48 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.016 22:28:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.415 22:28:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.415 22:28:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.415 22:28:49 accel.accel_xor -- accel/accel.sh@19 
-- # IFS=: 00:05:06.415 22:28:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.415 22:28:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.415 22:28:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.415 22:28:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.415 22:28:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.415 22:28:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.415 22:28:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.415 22:28:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.415 22:28:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.415 22:28:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.415 22:28:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.415 22:28:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.415 22:28:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.415 22:28:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.415 22:28:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.415 22:28:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.415 22:28:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.415 22:28:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.415 22:28:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.415 22:28:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.415 22:28:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.415 22:28:49 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:06.415 22:28:49 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:06.415 22:28:49 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:06.415 00:05:06.415 real 0m1.480s 00:05:06.415 user 0m1.338s 00:05:06.415 sys 0m0.144s 00:05:06.415 22:28:49 accel.accel_xor -- 
common/autotest_common.sh@1118 -- # xtrace_disable 00:05:06.415 22:28:49 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:06.415 ************************************ 00:05:06.415 END TEST accel_xor 00:05:06.415 ************************************ 00:05:06.415 22:28:49 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:06.415 22:28:49 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:06.415 22:28:49 accel -- common/autotest_common.sh@1093 -- # '[' 6 -le 1 ']' 00:05:06.415 22:28:49 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:06.415 22:28:49 accel -- common/autotest_common.sh@10 -- # set +x 00:05:06.415 ************************************ 00:05:06.415 START TEST accel_dif_verify 00:05:06.415 ************************************ 00:05:06.415 22:28:49 accel.accel_dif_verify -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w dif_verify 00:05:06.415 22:28:49 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:06.415 22:28:49 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:06.415 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.415 22:28:49 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:06.415 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.415 22:28:49 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:06.415 22:28:49 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:06.415 22:28:49 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:06.415 22:28:49 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:06.415 22:28:49 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:06.415 22:28:49 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 
00:05:06.415 22:28:49 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:06.415 22:28:49 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:06.415 22:28:49 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:06.415 [2024-07-15 22:28:49.740768] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:05:06.416 [2024-07-15 22:28:49.740832] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1145014 ] 00:05:06.416 [2024-07-15 22:28:49.806448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.672 [2024-07-15 22:28:49.929920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@21 
-- # case "$var" in 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.672 22:28:49 accel.accel_dif_verify -- 
accel/accel.sh@20 -- # val='8 bytes' 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.672 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.672 22:28:49 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # read -r var val 00:05:06.673 22:28:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:06.673 22:28:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.673 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.673 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.673 22:28:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:06.673 22:28:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.673 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.673 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.673 22:28:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:06.673 22:28:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.673 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.673 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.673 22:28:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:06.673 22:28:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.673 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.673 22:28:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.063 22:28:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:08.063 22:28:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.063 22:28:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.063 22:28:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.063 22:28:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:08.063 22:28:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.063 22:28:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.063 22:28:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r 
var val 00:05:08.063 22:28:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:08.063 22:28:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.063 22:28:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.063 22:28:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.063 22:28:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:08.063 22:28:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.063 22:28:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.063 22:28:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.063 22:28:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:08.063 22:28:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.063 22:28:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.063 22:28:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.063 22:28:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:08.063 22:28:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.063 22:28:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.063 22:28:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.063 22:28:51 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:08.063 22:28:51 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:08.063 22:28:51 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:08.063 00:05:08.063 real 0m1.494s 00:05:08.063 user 0m1.337s 00:05:08.063 sys 0m0.160s 00:05:08.063 22:28:51 accel.accel_dif_verify -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:08.063 22:28:51 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:08.063 ************************************ 00:05:08.063 END TEST accel_dif_verify 00:05:08.063 
************************************ 00:05:08.063 22:28:51 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:08.063 22:28:51 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:08.063 22:28:51 accel -- common/autotest_common.sh@1093 -- # '[' 6 -le 1 ']' 00:05:08.063 22:28:51 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:08.063 22:28:51 accel -- common/autotest_common.sh@10 -- # set +x 00:05:08.063 ************************************ 00:05:08.063 START TEST accel_dif_generate 00:05:08.063 ************************************ 00:05:08.063 22:28:51 accel.accel_dif_generate -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w dif_generate 00:05:08.063 22:28:51 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:08.063 22:28:51 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:08.063 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:08.064 22:28:51 
accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:08.064 [2024-07-15 22:28:51.282339] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:05:08.064 [2024-07-15 22:28:51.282403] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1145204 ] 00:05:08.064 [2024-07-15 22:28:51.347363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.064 [2024-07-15 22:28:51.470618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.064 22:28:51 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:08.064 22:28:51 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.064 
22:28:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.064 22:28:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.435 22:28:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:09.435 22:28:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.435 22:28:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.435 22:28:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.435 22:28:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:09.435 22:28:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.435 22:28:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.435 22:28:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var 
val 00:05:09.435 22:28:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:09.435 22:28:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.435 22:28:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.435 22:28:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.435 22:28:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:09.435 22:28:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.435 22:28:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.435 22:28:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.435 22:28:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:09.435 22:28:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.435 22:28:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.435 22:28:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.435 22:28:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:09.436 22:28:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.436 22:28:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.436 22:28:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.436 22:28:52 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:09.436 22:28:52 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:09.436 22:28:52 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:09.436 00:05:09.436 real 0m1.492s 00:05:09.436 user 0m1.343s 00:05:09.436 sys 0m0.153s 00:05:09.436 22:28:52 accel.accel_dif_generate -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:09.436 22:28:52 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:09.436 ************************************ 00:05:09.436 END TEST 
accel_dif_generate 00:05:09.436 ************************************ 00:05:09.436 22:28:52 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:09.436 22:28:52 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:09.436 22:28:52 accel -- common/autotest_common.sh@1093 -- # '[' 6 -le 1 ']' 00:05:09.436 22:28:52 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:09.436 22:28:52 accel -- common/autotest_common.sh@10 -- # set +x 00:05:09.436 ************************************ 00:05:09.436 START TEST accel_dif_generate_copy 00:05:09.436 ************************************ 00:05:09.436 22:28:52 accel.accel_dif_generate_copy -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w dif_generate_copy 00:05:09.436 22:28:52 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:09.436 22:28:52 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:09.436 22:28:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.436 22:28:52 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:09.436 22:28:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:09.436 22:28:52 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:09.436 22:28:52 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:09.436 22:28:52 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:09.436 22:28:52 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:09.436 22:28:52 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:09.436 22:28:52 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:09.436 22:28:52 accel.accel_dif_generate_copy -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:05:09.436 22:28:52 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:09.436 22:28:52 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:09.436 [2024-07-15 22:28:52.819000] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:05:09.436 [2024-07-15 22:28:52.819065] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1145439 ] 00:05:09.436 [2024-07-15 22:28:52.879969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.694 [2024-07-15 22:28:53.001488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 
00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case 
"$var" in 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.694 22:28:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.068 22:28:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:11.068 22:28:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.068 22:28:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.068 22:28:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.068 22:28:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:11.068 22:28:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.068 22:28:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.068 22:28:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.068 22:28:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:11.068 22:28:54 
accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.069 22:28:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.069 22:28:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.069 22:28:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:11.069 22:28:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.069 22:28:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.069 22:28:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.069 22:28:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:11.069 22:28:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.069 22:28:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.069 22:28:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.069 22:28:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:11.069 22:28:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.069 22:28:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.069 22:28:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.069 22:28:54 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:11.069 22:28:54 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:11.069 22:28:54 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:11.069 00:05:11.069 real 0m1.469s 00:05:11.069 user 0m1.334s 00:05:11.069 sys 0m0.137s 00:05:11.069 22:28:54 accel.accel_dif_generate_copy -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:11.069 22:28:54 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:11.069 ************************************ 00:05:11.069 END TEST 
accel_dif_generate_copy 00:05:11.069 ************************************ 00:05:11.069 22:28:54 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:11.069 22:28:54 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:11.069 22:28:54 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:11.069 22:28:54 accel -- common/autotest_common.sh@1093 -- # '[' 8 -le 1 ']' 00:05:11.069 22:28:54 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:11.069 22:28:54 accel -- common/autotest_common.sh@10 -- # set +x 00:05:11.069 ************************************ 00:05:11.069 START TEST accel_comp 00:05:11.069 ************************************ 00:05:11.069 22:28:54 accel.accel_comp -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:11.069 22:28:54 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:11.069 22:28:54 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:11.069 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.069 22:28:54 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:11.069 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.069 22:28:54 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:11.069 22:28:54 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:11.069 22:28:54 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:11.069 22:28:54 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:11.069 22:28:54 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:11.069 22:28:54 
accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:11.069 22:28:54 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:11.069 22:28:54 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:11.069 22:28:54 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:11.069 [2024-07-15 22:28:54.341396] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:05:11.069 [2024-07-15 22:28:54.341461] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1145606 ] 00:05:11.069 [2024-07-15 22:28:54.404666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.069 [2024-07-15 22:28:54.526152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.327 22:28:54 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 
00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.327 22:28:54 
accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.327 22:28:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.700 22:28:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:12.700 22:28:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.700 22:28:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.700 22:28:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.700 22:28:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:12.700 22:28:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.700 22:28:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.700 22:28:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.700 22:28:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:12.700 22:28:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.700 22:28:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.700 22:28:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.700 22:28:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:12.700 22:28:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.700 22:28:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.700 22:28:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.700 22:28:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:12.700 22:28:55 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:05:12.700 22:28:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.700 22:28:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.700 22:28:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:12.700 22:28:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.700 22:28:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.700 22:28:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.700 22:28:55 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:12.700 22:28:55 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:12.700 22:28:55 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:12.700 00:05:12.700 real 0m1.492s 00:05:12.700 user 0m1.346s 00:05:12.700 sys 0m0.149s 00:05:12.700 22:28:55 accel.accel_comp -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:12.700 22:28:55 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:12.700 ************************************ 00:05:12.700 END TEST accel_comp 00:05:12.700 ************************************ 00:05:12.700 22:28:55 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:12.700 22:28:55 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:12.700 22:28:55 accel -- common/autotest_common.sh@1093 -- # '[' 9 -le 1 ']' 00:05:12.700 22:28:55 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:12.700 22:28:55 accel -- common/autotest_common.sh@10 -- # set +x 00:05:12.700 ************************************ 00:05:12.700 START TEST accel_decomp 00:05:12.700 ************************************ 00:05:12.700 22:28:55 accel.accel_decomp -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:12.700 22:28:55 
accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:12.700 22:28:55 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:12.700 22:28:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.700 22:28:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.700 22:28:55 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:12.700 22:28:55 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:12.700 22:28:55 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:12.700 22:28:55 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:12.700 22:28:55 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:12.700 22:28:55 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:12.700 22:28:55 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:12.700 22:28:55 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:12.700 22:28:55 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:12.700 22:28:55 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:12.700 [2024-07-15 22:28:55.876961] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:05:12.700 [2024-07-15 22:28:55.877028] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1145854 ] 00:05:12.700 [2024-07-15 22:28:55.939778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.700 [2024-07-15 22:28:56.062915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.700 22:28:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:12.700 22:28:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.700 22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.700 22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.700 22:28:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:12.700 22:28:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.700 22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.700 22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.700 22:28:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:12.700 22:28:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.700 22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.701 
22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:12.701 22:28:56 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.701 22:28:56 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.701 22:28:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.075 22:28:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:14.075 22:28:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.075 22:28:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.075 22:28:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.075 22:28:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:14.075 22:28:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.075 22:28:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.075 22:28:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.075 22:28:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:14.075 22:28:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.075 22:28:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.075 22:28:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.075 22:28:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:14.075 22:28:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.075 22:28:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.075 22:28:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.075 22:28:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:14.075 22:28:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.075 22:28:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.075 22:28:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.075 22:28:57 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:14.075 22:28:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.075 22:28:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.075 22:28:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.075 22:28:57 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:14.075 22:28:57 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:14.075 22:28:57 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:14.075 00:05:14.075 real 0m1.494s 00:05:14.075 user 0m1.351s 00:05:14.075 sys 0m0.147s 00:05:14.075 22:28:57 accel.accel_decomp -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:14.075 22:28:57 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:14.075 ************************************ 00:05:14.075 END TEST accel_decomp 00:05:14.075 ************************************ 00:05:14.075 22:28:57 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:14.075 22:28:57 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:14.075 22:28:57 accel -- common/autotest_common.sh@1093 -- # '[' 11 -le 1 ']' 00:05:14.075 22:28:57 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:14.075 22:28:57 accel -- common/autotest_common.sh@10 -- # set +x 00:05:14.075 ************************************ 00:05:14.075 START TEST accel_decomp_full 00:05:14.075 ************************************ 00:05:14.075 22:28:57 accel.accel_decomp_full -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:14.075 22:28:57 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:14.075 22:28:57 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:05:14.075 
22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.075 22:28:57 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:14.075 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.075 22:28:57 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:14.075 22:28:57 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:14.075 22:28:57 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:14.075 22:28:57 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:14.075 22:28:57 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:14.075 22:28:57 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:14.075 22:28:57 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:14.075 22:28:57 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:05:14.075 22:28:57 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:05:14.075 [2024-07-15 22:28:57.420317] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:05:14.075 [2024-07-15 22:28:57.420387] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1146031 ] 00:05:14.075 [2024-07-15 22:28:57.484103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.334 [2024-07-15 22:28:57.606903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 
00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.334 22:28:57 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # read -r var val 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.334 22:28:57 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.334 22:28:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:15.709 22:28:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:15.709 22:28:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:15.709 22:28:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:15.709 22:28:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:15.709 22:28:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:15.709 22:28:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:15.709 22:28:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:15.709 22:28:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:15.709 22:28:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:15.709 22:28:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:15.709 22:28:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:15.709 22:28:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:15.709 22:28:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:15.709 22:28:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:15.709 22:28:58 accel.accel_decomp_full -- accel/accel.sh@19 
-- # IFS=:
00:05:15.709 22:28:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:05:15.709 22:28:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:05:15.709 22:28:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:05:15.709 22:28:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:05:15.709 22:28:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:05:15.709 22:28:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:05:15.709 22:28:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:05:15.709 22:28:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:05:15.709 22:28:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:05:15.709 22:28:58 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:15.709 22:28:58 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:15.709 22:28:58 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:15.709
00:05:15.709 real 0m1.499s
00:05:15.709 user 0m1.363s
00:05:15.709 sys 0m0.138s
00:05:15.709 22:28:58 accel.accel_decomp_full -- common/autotest_common.sh@1118 -- # xtrace_disable
00:05:15.709 22:28:58 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x
00:05:15.709 ************************************
00:05:15.709 END TEST accel_decomp_full
00:05:15.709 ************************************
00:05:15.709 22:28:58 accel -- common/autotest_common.sh@1136 -- # return 0
00:05:15.709 22:28:58 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:05:15.709 22:28:58 accel -- common/autotest_common.sh@1093 -- # '[' 11 -le 1 ']'
00:05:15.709 22:28:58 accel -- common/autotest_common.sh@1099 -- # xtrace_disable
00:05:15.709 22:28:58 accel -- common/autotest_common.sh@10 -- # set +x
************************************
00:05:15.709 START TEST accel_decomp_mcore
************************************
00:05:15.709 22:28:58 accel.accel_decomp_mcore -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:05:15.709 22:28:58 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc
00:05:15.709 22:28:58 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module
00:05:15.709 22:28:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:15.709 22:28:58 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:05:15.709 22:28:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:15.709 22:28:58 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:05:15.709 22:28:58 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config
00:05:15.709 22:28:58 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:15.709 22:28:58 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:15.709 22:28:58 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:15.709 22:28:58 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:15.709 22:28:58 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:15.709 22:28:58 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=,
00:05:15.709 22:28:58 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r .
[2024-07-15 22:28:58.965593] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization...
00:05:15.709 [2024-07-15 22:28:58.965658] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1146194 ]
00:05:15.709 [2024-07-15 22:28:59.028372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:15.709 [2024-07-15 22:28:59.155137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:15.709 [2024-07-15 22:28:59.155191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:05:15.709 [2024-07-15 22:28:59.155244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:05:15.709 [2024-07-15 22:28:59.155248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:15.969 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:05:15.969 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:15.969 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:15.969 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:15.969 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:05:15.969 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:15.969 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:15.969 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:15.969 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:05:15.969 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:15.969 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:15.969 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:15.969 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf
00:05:15.969 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:15.969 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:15.969 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:15.969 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:15.969 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:15.969 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:15.969 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:15.969 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:15.969 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:15.969 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:15.969 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:15.969 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:15.970 22:28:59 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:15.970 22:28:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in
00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:17.348 22:29:00 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:17.349
00:05:17.349 real 0m1.500s
00:05:17.349 user 0m4.824s
00:05:17.349 sys 0m0.153s
00:05:17.349 22:29:00 accel.accel_decomp_mcore -- common/autotest_common.sh@1118 -- # xtrace_disable
00:05:17.349 22:29:00 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x
00:05:17.349 ************************************
00:05:17.349 END TEST accel_decomp_mcore
************************************
00:05:17.349 22:29:00 accel -- common/autotest_common.sh@1136 -- # return 0
00:05:17.349 22:29:00 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:05:17.349 22:29:00 accel -- common/autotest_common.sh@1093 -- # '[' 13 -le 1 ']'
00:05:17.349 22:29:00 accel -- common/autotest_common.sh@1099 -- # xtrace_disable
00:05:17.349 22:29:00 accel -- common/autotest_common.sh@10 -- # set +x
00:05:17.349 ************************************
00:05:17.349 START TEST accel_decomp_full_mcore
************************************
00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc
00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module
00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config
00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=,
00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r .
[2024-07-15 22:29:00.511743] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization...
00:05:17.349 [2024-07-15 22:29:00.511807] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1146470 ]
00:05:17.349 [2024-07-15 22:29:00.576100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:17.349 [2024-07-15 22:29:00.700888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:17.349 [2024-07-15 22:29:00.700933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:05:17.349 [2024-07-15 22:29:00.700989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:05:17.349 [2024-07-15 22:29:00.700992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf
00:05:17.349 22:29:00
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.349 22:29:00 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 
-- # IFS=: 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.349 22:29:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:18.730 22:29:02 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 
-- # read -r var val
00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:18.730
00:05:18.730 real 0m1.516s
00:05:18.730 user 0m4.862s
00:05:18.730 sys 0m0.163s
00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1118 -- # xtrace_disable
00:05:18.730 22:29:02 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x
00:05:18.730 ************************************
00:05:18.730 END TEST accel_decomp_full_mcore
************************************
00:05:18.730 22:29:02 accel -- common/autotest_common.sh@1136 -- # return 0
00:05:18.730 22:29:02 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:05:18.730 22:29:02 accel -- common/autotest_common.sh@1093 -- # '[' 11 -le 1 ']'
00:05:18.730 22:29:02 accel -- common/autotest_common.sh@1099 -- # xtrace_disable
00:05:18.730 22:29:02 accel -- common/autotest_common.sh@10 -- # set +x
************************************
00:05:18.730 START TEST accel_decomp_mthread
************************************
00:05:18.730 22:29:02 accel.accel_decomp_mthread -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:05:18.730 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc
00:05:18.730 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module
00:05:18.730 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:18.730 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:18.730 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:05:18.730 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:05:18.730 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config
00:05:18.730 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:18.730 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:18.730 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:18.730 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:18.730 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:18.730 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=,
00:05:18.730 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r .
[2024-07-15 22:29:02.075624] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization...
00:05:18.730 [2024-07-15 22:29:02.075692] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1146628 ]
00:05:18.730 [2024-07-15 22:29:02.138030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:18.991 [2024-07-15 22:29:02.264637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1
00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:05:18.991 22:29:02
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # 
accel_module=software 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:18.991 
22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:18.991 22:29:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.372 22:29:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:20.372 22:29:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.372 22:29:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.372 22:29:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.372 22:29:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:20.372 22:29:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.372 22:29:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.372 22:29:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.372 22:29:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:20.372 22:29:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.372 22:29:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.372 22:29:03 accel.accel_decomp_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:05:20.372 22:29:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:20.372 22:29:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.372 22:29:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.372 22:29:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.372 22:29:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:20.372 22:29:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.372 22:29:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.372 22:29:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.372 22:29:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:20.372 22:29:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.372 22:29:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.372 22:29:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.372 22:29:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:20.372 22:29:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.372 22:29:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.372 22:29:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.372 22:29:03 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:20.372 22:29:03 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:20.372 22:29:03 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:20.372 00:05:20.372 real 0m1.492s 00:05:20.372 user 0m1.347s 00:05:20.372 sys 0m0.147s 00:05:20.372 22:29:03 accel.accel_decomp_mthread -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:20.372 22:29:03 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 
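The long runs of `IFS=:` / `read -r var val` / `case "$var" in` records above are xtrace of the harness splitting `name:value` tokens and dispatching on the name. A rough standalone illustration of that shell pattern follows; the `parse` helper name and the sample input lines are invented here and are not SPDK's actual code.

```shell
# Illustrative only: the IFS-split-then-case pattern seen in the xtrace.
# "parse" and the sample input are assumptions, not SPDK helpers.
parse() {
    while IFS=: read -r var val; do   # split each line on ':' into var/val
        case "$var" in
            software)   echo "module=$val" ;;
            decompress) echo "opcode=$val" ;;
            *)          echo "skip=$var"   ;;
        esac
    done
}
printf 'software:enabled\ndecompress:on\n' | parse
# prints: module=enabled
#         opcode=on
```

Setting `IFS=:` only on the `read` command keeps the field separator change scoped to that one invocation, which is why the trace shows `IFS=:` immediately before every `read -r var val`.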
00:05:20.372 ************************************ 00:05:20.372 END TEST accel_decomp_mthread 00:05:20.372 ************************************ 00:05:20.372 22:29:03 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:20.372 22:29:03 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:20.372 22:29:03 accel -- common/autotest_common.sh@1093 -- # '[' 13 -le 1 ']' 00:05:20.372 22:29:03 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:20.372 22:29:03 accel -- common/autotest_common.sh@10 -- # set +x 00:05:20.372 ************************************ 00:05:20.372 START TEST accel_decomp_full_mthread 00:05:20.372 ************************************ 00:05:20.372 22:29:03 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:20.372 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:20.372 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:20.372 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.372 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:20.372 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.372 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:20.372 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:20.372 22:29:03 
accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:20.372 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:20.372 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.372 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.372 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:20.372 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:20.372 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:20.373 [2024-07-15 22:29:03.612498] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:05:20.373 [2024-07-15 22:29:03.612565] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1146793 ] 00:05:20.373 [2024-07-15 22:29:03.674378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.373 [2024-07-15 22:29:03.797642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.373 
22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val='111250 bytes' 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val=32 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- 
accel/accel.sh@19 -- # IFS=: 00:05:20.373 22:29:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.814 22:29:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:21.814 22:29:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.814 22:29:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.814 22:29:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.814 22:29:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:21.814 22:29:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.814 22:29:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.814 22:29:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.814 22:29:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:21.814 22:29:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.814 22:29:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.814 22:29:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.814 22:29:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:21.814 22:29:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.814 22:29:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.814 22:29:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.814 22:29:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:21.814 22:29:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.814 22:29:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.814 22:29:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.814 22:29:05 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:05:21.814 22:29:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.814 22:29:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.814 22:29:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.814 22:29:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:21.814 22:29:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.814 22:29:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.814 22:29:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.814 22:29:05 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:21.814 22:29:05 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:21.815 22:29:05 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:21.815 00:05:21.815 real 0m1.518s 00:05:21.815 user 0m1.378s 00:05:21.815 sys 0m0.142s 00:05:21.815 22:29:05 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:21.815 22:29:05 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:21.815 ************************************ 00:05:21.815 END TEST accel_decomp_full_mthread 00:05:21.815 ************************************ 00:05:21.815 22:29:05 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:21.815 22:29:05 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:21.815 22:29:05 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:21.815 22:29:05 accel -- accel/accel.sh@137 -- # build_accel_config 00:05:21.815 22:29:05 accel -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:05:21.815 22:29:05 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:21.815 22:29:05 
accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:21.815 22:29:05 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:21.815 22:29:05 accel -- common/autotest_common.sh@10 -- # set +x 00:05:21.815 22:29:05 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.815 22:29:05 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.815 22:29:05 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:21.815 22:29:05 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:21.815 22:29:05 accel -- accel/accel.sh@41 -- # jq -r . 00:05:21.815 ************************************ 00:05:21.815 START TEST accel_dif_functional_tests 00:05:21.815 ************************************ 00:05:21.815 22:29:05 accel.accel_dif_functional_tests -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:21.815 [2024-07-15 22:29:05.201323] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:05:21.815 [2024-07-15 22:29:05.201383] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1147070 ] 00:05:21.815 [2024-07-15 22:29:05.262298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:22.074 [2024-07-15 22:29:05.387348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.074 [2024-07-15 22:29:05.387399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:22.074 [2024-07-15 22:29:05.387403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.074 00:05:22.074 00:05:22.074 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.074 http://cunit.sourceforge.net/ 00:05:22.074 00:05:22.074 00:05:22.074 Suite: accel_dif 00:05:22.074 Test: verify: DIF generated, GUARD check ...passed 00:05:22.074 Test: verify: DIF generated, APPTAG 
check ...passed 00:05:22.074 Test: verify: DIF generated, REFTAG check ...passed 00:05:22.074 Test: verify: DIF not generated, GUARD check ...[2024-07-15 22:29:05.489693] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:22.074 passed 00:05:22.074 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 22:29:05.489772] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:22.074 passed 00:05:22.074 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 22:29:05.489812] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:22.074 passed 00:05:22.074 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:22.074 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 22:29:05.489897] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:22.074 passed 00:05:22.074 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:22.074 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:22.074 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:22.074 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 22:29:05.490059] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:22.074 passed 00:05:22.074 Test: verify copy: DIF generated, GUARD check ...passed 00:05:22.074 Test: verify copy: DIF generated, APPTAG check ...passed 00:05:22.074 Test: verify copy: DIF generated, REFTAG check ...passed 00:05:22.074 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 22:29:05.490241] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:22.074 passed 00:05:22.074 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 22:29:05.490285] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 
00:05:22.074 passed 00:05:22.074 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 22:29:05.490325] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:22.074 passed 00:05:22.074 Test: generate copy: DIF generated, GUARD check ...passed 00:05:22.074 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:22.074 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:22.074 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:22.074 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:22.074 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:22.074 Test: generate copy: iovecs-len validate ...[2024-07-15 22:29:05.490585] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:05:22.074 passed 00:05:22.074 Test: generate copy: buffer alignment validate ...passed 00:05:22.074 00:05:22.074 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.074 suites 1 1 n/a 0 0 00:05:22.074 tests 26 26 26 0 0 00:05:22.074 asserts 115 115 115 0 n/a 00:05:22.074 00:05:22.074 Elapsed time = 0.003 seconds 00:05:22.332 00:05:22.332 real 0m0.589s 00:05:22.332 user 0m0.886s 00:05:22.332 sys 0m0.187s 00:05:22.332 22:29:05 accel.accel_dif_functional_tests -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:22.332 22:29:05 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:22.332 ************************************ 00:05:22.332 END TEST accel_dif_functional_tests 00:05:22.332 ************************************ 00:05:22.332 22:29:05 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:22.332 00:05:22.332 real 0m33.513s 00:05:22.332 user 0m36.973s 00:05:22.332 sys 0m4.618s 00:05:22.332 22:29:05 accel -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:22.332 22:29:05 accel -- common/autotest_common.sh@10 -- # 
set +x 00:05:22.332 ************************************ 00:05:22.332 END TEST accel 00:05:22.332 ************************************ 00:05:22.332 22:29:05 -- common/autotest_common.sh@1136 -- # return 0 00:05:22.332 22:29:05 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:22.332 22:29:05 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:22.332 22:29:05 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:22.332 22:29:05 -- common/autotest_common.sh@10 -- # set +x 00:05:22.332 ************************************ 00:05:22.332 START TEST accel_rpc 00:05:22.332 ************************************ 00:05:22.332 22:29:05 accel_rpc -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:22.590 * Looking for test storage... 00:05:22.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:22.590 22:29:05 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:22.590 22:29:05 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1147142 00:05:22.590 22:29:05 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:22.590 22:29:05 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1147142 00:05:22.590 22:29:05 accel_rpc -- common/autotest_common.sh@823 -- # '[' -z 1147142 ']' 00:05:22.590 22:29:05 accel_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.590 22:29:05 accel_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:22.590 22:29:05 accel_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
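Every `run_test NAME cmd...` call in this log emits the START/END banners seen above. A toy wrapper with the same shape is sketched below; it is illustrative only — the real helper lives in SPDK's autotest_common.sh and additionally handles timing, xtrace control, and result accounting.

```shell
# Toy run_test-style wrapper (assumption: mimics only the banner/exit-status
# shape of SPDK's real helper in autotest_common.sh).
run_test() {
    name=$1; shift
    echo "START TEST $name"
    if "$@"; then
        echo "END TEST $name"
    else
        echo "FAILED TEST $name" >&2
        return 1
    fi
}

run_test demo_true true
# prints: START TEST demo_true
#         END TEST demo_true
```

Because the wrapped command's exit status decides between the END and FAILED banners, a non-zero status propagates to the caller, which is what lets the outer pipeline mark the whole stage failed.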
00:05:22.590 22:29:05 accel_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:22.590 22:29:05 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.590 [2024-07-15 22:29:05.920317] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:05:22.590 [2024-07-15 22:29:05.920410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1147142 ] 00:05:22.590 [2024-07-15 22:29:05.976768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.590 [2024-07-15 22:29:06.083395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.850 22:29:06 accel_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:22.850 22:29:06 accel_rpc -- common/autotest_common.sh@856 -- # return 0 00:05:22.850 22:29:06 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:22.850 22:29:06 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:22.850 22:29:06 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:22.850 22:29:06 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:22.850 22:29:06 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:22.850 22:29:06 accel_rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:22.850 22:29:06 accel_rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:22.850 22:29:06 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.850 ************************************ 00:05:22.850 START TEST accel_assign_opcode 00:05:22.850 ************************************ 00:05:22.850 22:29:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1117 -- # accel_assign_opcode_test_suite 00:05:22.850 22:29:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o 
copy -m incorrect 00:05:22.850 22:29:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:22.850 22:29:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:22.850 [2024-07-15 22:29:06.156046] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:22.850 22:29:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:22.850 22:29:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:22.850 22:29:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:22.850 22:29:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:22.850 [2024-07-15 22:29:06.164055] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:22.850 22:29:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:22.850 22:29:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:22.850 22:29:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:22.850 22:29:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:23.110 22:29:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:23.110 22:29:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:23.110 22:29:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:23.110 22:29:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:23.110 22:29:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:23.110 22:29:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:23.110 22:29:06 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:23.110 software 00:05:23.110 00:05:23.110 real 0m0.303s 00:05:23.110 user 0m0.038s 00:05:23.110 sys 0m0.009s 00:05:23.110 22:29:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:23.110 22:29:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:23.110 ************************************ 00:05:23.110 END TEST accel_assign_opcode 00:05:23.110 ************************************ 00:05:23.110 22:29:06 accel_rpc -- common/autotest_common.sh@1136 -- # return 0 00:05:23.110 22:29:06 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1147142 00:05:23.110 22:29:06 accel_rpc -- common/autotest_common.sh@942 -- # '[' -z 1147142 ']' 00:05:23.110 22:29:06 accel_rpc -- common/autotest_common.sh@946 -- # kill -0 1147142 00:05:23.110 22:29:06 accel_rpc -- common/autotest_common.sh@947 -- # uname 00:05:23.110 22:29:06 accel_rpc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:23.110 22:29:06 accel_rpc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1147142 00:05:23.110 22:29:06 accel_rpc -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:23.110 22:29:06 accel_rpc -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:23.110 22:29:06 accel_rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1147142' 00:05:23.110 killing process with pid 1147142 00:05:23.110 22:29:06 accel_rpc -- common/autotest_common.sh@961 -- # kill 1147142 00:05:23.110 22:29:06 accel_rpc -- common/autotest_common.sh@966 -- # wait 1147142 00:05:23.677 00:05:23.677 real 0m1.153s 00:05:23.677 user 0m1.088s 00:05:23.677 sys 0m0.431s 00:05:23.677 22:29:06 accel_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:23.677 22:29:06 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.677 ************************************ 00:05:23.677 END TEST accel_rpc 
00:05:23.677 ************************************ 00:05:23.677 22:29:06 -- common/autotest_common.sh@1136 -- # return 0 00:05:23.677 22:29:06 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:23.677 22:29:06 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:23.677 22:29:06 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:23.677 22:29:06 -- common/autotest_common.sh@10 -- # set +x 00:05:23.677 ************************************ 00:05:23.677 START TEST app_cmdline 00:05:23.677 ************************************ 00:05:23.677 22:29:07 app_cmdline -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:23.677 * Looking for test storage... 00:05:23.677 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:23.677 22:29:07 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:23.677 22:29:07 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1147346 00:05:23.677 22:29:07 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:23.677 22:29:07 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1147346 00:05:23.677 22:29:07 app_cmdline -- common/autotest_common.sh@823 -- # '[' -z 1147346 ']' 00:05:23.677 22:29:07 app_cmdline -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.677 22:29:07 app_cmdline -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:23.677 22:29:07 app_cmdline -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:23.677 22:29:07 app_cmdline -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:23.677 22:29:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:23.677 [2024-07-15 22:29:07.127543] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:05:23.677 [2024-07-15 22:29:07.127638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1147346 ] 00:05:23.936 [2024-07-15 22:29:07.187965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.936 [2024-07-15 22:29:07.295736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.194 22:29:07 app_cmdline -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:24.194 22:29:07 app_cmdline -- common/autotest_common.sh@856 -- # return 0 00:05:24.194 22:29:07 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:24.452 { 00:05:24.452 "version": "SPDK v24.09-pre git sha1 958a93494", 00:05:24.452 "fields": { 00:05:24.452 "major": 24, 00:05:24.452 "minor": 9, 00:05:24.452 "patch": 0, 00:05:24.452 "suffix": "-pre", 00:05:24.452 "commit": "958a93494" 00:05:24.452 } 00:05:24.452 } 00:05:24.452 22:29:07 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:24.452 22:29:07 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:24.452 22:29:07 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:24.452 22:29:07 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:24.452 22:29:07 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:24.452 22:29:07 app_cmdline -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:24.452 22:29:07 app_cmdline -- app/cmdline.sh@26 -- 
# jq -r '.[]' 00:05:24.452 22:29:07 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:24.452 22:29:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:24.452 22:29:07 app_cmdline -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:24.452 22:29:07 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:24.452 22:29:07 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:24.452 22:29:07 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:24.452 22:29:07 app_cmdline -- common/autotest_common.sh@642 -- # local es=0 00:05:24.452 22:29:07 app_cmdline -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:24.452 22:29:07 app_cmdline -- common/autotest_common.sh@630 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:24.452 22:29:07 app_cmdline -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:24.452 22:29:07 app_cmdline -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:24.452 22:29:07 app_cmdline -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:24.452 22:29:07 app_cmdline -- common/autotest_common.sh@636 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:24.452 22:29:07 app_cmdline -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:24.452 22:29:07 app_cmdline -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:24.452 22:29:07 app_cmdline -- common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:24.452 22:29:07 app_cmdline -- common/autotest_common.sh@645 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:24.709 request: 00:05:24.709 { 00:05:24.709 "method": "env_dpdk_get_mem_stats", 00:05:24.709 "req_id": 1 00:05:24.709 } 00:05:24.709 Got JSON-RPC error response 00:05:24.709 response: 00:05:24.709 { 00:05:24.709 "code": -32601, 00:05:24.709 "message": "Method not found" 00:05:24.709 } 00:05:24.709 22:29:08 app_cmdline -- common/autotest_common.sh@645 -- # es=1 00:05:24.709 22:29:08 app_cmdline -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:05:24.709 22:29:08 app_cmdline -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:05:24.709 22:29:08 app_cmdline -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:05:24.709 22:29:08 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1147346 00:05:24.709 22:29:08 app_cmdline -- common/autotest_common.sh@942 -- # '[' -z 1147346 ']' 00:05:24.709 22:29:08 app_cmdline -- common/autotest_common.sh@946 -- # kill -0 1147346 00:05:24.709 22:29:08 app_cmdline -- common/autotest_common.sh@947 -- # uname 00:05:24.709 22:29:08 app_cmdline -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:24.709 22:29:08 app_cmdline -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1147346 00:05:24.709 22:29:08 app_cmdline -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:24.709 22:29:08 app_cmdline -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:24.709 22:29:08 app_cmdline -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1147346' 00:05:24.709 killing process with pid 1147346 00:05:24.709 22:29:08 app_cmdline -- common/autotest_common.sh@961 -- # kill 1147346 00:05:24.709 22:29:08 app_cmdline -- common/autotest_common.sh@966 -- # wait 1147346 00:05:25.273 00:05:25.273 real 0m1.579s 00:05:25.273 user 0m1.900s 00:05:25.273 sys 0m0.478s 00:05:25.273 22:29:08 app_cmdline -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:25.273 22:29:08 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:05:25.273 ************************************ 00:05:25.273 END TEST app_cmdline 00:05:25.273 ************************************ 00:05:25.273 22:29:08 -- common/autotest_common.sh@1136 -- # return 0 00:05:25.273 22:29:08 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:25.273 22:29:08 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:25.273 22:29:08 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:25.273 22:29:08 -- common/autotest_common.sh@10 -- # set +x 00:05:25.273 ************************************ 00:05:25.273 START TEST version 00:05:25.273 ************************************ 00:05:25.273 22:29:08 version -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:25.273 * Looking for test storage... 00:05:25.273 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:25.273 22:29:08 version -- app/version.sh@17 -- # get_header_version major 00:05:25.273 22:29:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:25.273 22:29:08 version -- app/version.sh@14 -- # cut -f2 00:05:25.273 22:29:08 version -- app/version.sh@14 -- # tr -d '"' 00:05:25.273 22:29:08 version -- app/version.sh@17 -- # major=24 00:05:25.273 22:29:08 version -- app/version.sh@18 -- # get_header_version minor 00:05:25.273 22:29:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:25.273 22:29:08 version -- app/version.sh@14 -- # cut -f2 00:05:25.273 22:29:08 version -- app/version.sh@14 -- # tr -d '"' 00:05:25.273 22:29:08 version -- app/version.sh@18 -- # minor=9 00:05:25.273 22:29:08 version -- app/version.sh@19 -- # get_header_version 
patch 00:05:25.273 22:29:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:25.273 22:29:08 version -- app/version.sh@14 -- # cut -f2 00:05:25.273 22:29:08 version -- app/version.sh@14 -- # tr -d '"' 00:05:25.273 22:29:08 version -- app/version.sh@19 -- # patch=0 00:05:25.273 22:29:08 version -- app/version.sh@20 -- # get_header_version suffix 00:05:25.273 22:29:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:25.273 22:29:08 version -- app/version.sh@14 -- # cut -f2 00:05:25.273 22:29:08 version -- app/version.sh@14 -- # tr -d '"' 00:05:25.273 22:29:08 version -- app/version.sh@20 -- # suffix=-pre 00:05:25.273 22:29:08 version -- app/version.sh@22 -- # version=24.9 00:05:25.273 22:29:08 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:25.273 22:29:08 version -- app/version.sh@28 -- # version=24.9rc0 00:05:25.273 22:29:08 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:25.273 22:29:08 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:25.273 22:29:08 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:25.273 22:29:08 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:25.273 00:05:25.273 real 0m0.110s 00:05:25.273 user 0m0.057s 00:05:25.273 sys 0m0.074s 00:05:25.273 22:29:08 version -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:25.273 22:29:08 version -- common/autotest_common.sh@10 -- # set +x 00:05:25.273 ************************************ 
00:05:25.273 END TEST version 00:05:25.273 ************************************ 00:05:25.531 22:29:08 -- common/autotest_common.sh@1136 -- # return 0 00:05:25.531 22:29:08 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:05:25.531 22:29:08 -- spdk/autotest.sh@198 -- # uname -s 00:05:25.531 22:29:08 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:05:25.531 22:29:08 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:25.531 22:29:08 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:25.531 22:29:08 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:05:25.531 22:29:08 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:25.531 22:29:08 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:25.531 22:29:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:25.531 22:29:08 -- common/autotest_common.sh@10 -- # set +x 00:05:25.531 22:29:08 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:25.531 22:29:08 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:05:25.531 22:29:08 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:05:25.531 22:29:08 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:05:25.531 22:29:08 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:05:25.531 22:29:08 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:05:25.531 22:29:08 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:25.531 22:29:08 -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:05:25.531 22:29:08 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:25.531 22:29:08 -- common/autotest_common.sh@10 -- # set +x 00:05:25.531 ************************************ 00:05:25.531 START TEST nvmf_tcp 00:05:25.531 ************************************ 00:05:25.531 22:29:08 nvmf_tcp -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:25.531 * Looking for test storage... 
00:05:25.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:25.531 22:29:08 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:25.531 22:29:08 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:25.531 22:29:08 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:25.531 22:29:08 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:05:25.531 22:29:08 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:25.531 22:29:08 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:25.531 22:29:08 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:25.531 22:29:08 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:25.531 22:29:08 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:25.531 22:29:08 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:25.531 22:29:08 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:25.531 22:29:08 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:25.531 22:29:08 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:25.531 22:29:08 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:25.532 22:29:08 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:25.532 22:29:08 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:25.532 22:29:08 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:25.532 22:29:08 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:25.532 22:29:08 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:25.532 22:29:08 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:25.532 22:29:08 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:25.532 
22:29:08 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:25.532 22:29:08 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:25.532 22:29:08 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:25.532 22:29:08 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.532 22:29:08 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.532 22:29:08 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.532 22:29:08 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:05:25.532 22:29:08 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.532 22:29:08 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:05:25.532 22:29:08 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:25.532 22:29:08 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:25.532 22:29:08 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:25.532 22:29:08 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:25.532 22:29:08 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:25.532 22:29:08 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:25.532 22:29:08 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:25.532 22:29:08 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:25.532 22:29:08 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:25.532 22:29:08 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:05:25.532 22:29:08 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:05:25.532 22:29:08 nvmf_tcp -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:25.532 22:29:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:25.532 22:29:08 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:05:25.532 22:29:08 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:25.532 22:29:08 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:05:25.532 22:29:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:25.532 22:29:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:25.532 
************************************ 00:05:25.532 START TEST nvmf_example 00:05:25.532 ************************************ 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:25.532 * Looking for test storage... 00:05:25.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.532 22:29:08 
nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 
-eq 1 ']' 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:05:25.532 22:29:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:28.059 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:28.059 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:05:28.059 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:28.059 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:28.059 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:28.059 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:28.059 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:28.059 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:05:28.059 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:28.059 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:05:28.059 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:05:28.059 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:05:28.059 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:05:28.059 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:05:28.059 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:05:28.059 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:28.059 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:28.059 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:28.059 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:28.059 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:28.059 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:28.059 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:28.060 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example 
-- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:28.060 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:28.060 Found net devices under 0000:0a:00.0: cvl_0_0 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:28.060 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:28.060 22:29:10 
nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:28.060 22:29:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:28.060 22:29:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:28.060 22:29:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:28.060 22:29:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:28.060 22:29:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:28.060 22:29:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:28.060 22:29:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:28.060 22:29:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:28.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:28.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:05:28.060 00:05:28.060 --- 10.0.0.2 ping statistics --- 00:05:28.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:28.060 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:05:28.060 22:29:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:28.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:28.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:05:28.060 00:05:28.060 --- 10.0.0.1 ping statistics --- 00:05:28.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:28.060 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:05:28.060 22:29:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:28.060 22:29:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:05:28.060 22:29:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:28.060 22:29:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:28.060 22:29:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:28.060 22:29:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:28.060 22:29:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:28.060 22:29:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:28.060 22:29:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:28.060 22:29:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:05:28.060 22:29:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:05:28.060 22:29:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:28.060 22:29:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:28.060 22:29:11 
nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:05:28.060 22:29:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:05:28.060 22:29:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1149358 00:05:28.060 22:29:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:05:28.060 22:29:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:28.060 22:29:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1149358 00:05:28.060 22:29:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@823 -- # '[' -z 1149358 ']' 00:05:28.060 22:29:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.060 22:29:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:28.060 22:29:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:28.060 22:29:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:28.060 22:29:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:28.994 22:29:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:28.994 22:29:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # return 0 00:05:28.994 22:29:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:05:28.994 22:29:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.994 22:29:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:28.994 22:29:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:28.994 22:29:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:28.994 22:29:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:28.994 22:29:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:28.994 22:29:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:05:28.994 22:29:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:28.994 22:29:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:28.994 22:29:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:28.994 22:29:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:05:28.994 22:29:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:28.994 22:29:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:28.994 22:29:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:28.994 22:29:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@581 -- # [[ 0 == 
0 ]] 00:05:28.994 22:29:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:05:28.994 22:29:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:05:28.994 22:29:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:28.994 22:29:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:28.994 22:29:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:28.994 22:29:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:28.994 22:29:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:28.994 22:29:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:28.994 22:29:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:28.994 22:29:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:05:28.994 22:29:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:05:39.025 Initializing NVMe Controllers 00:05:39.025 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:39.025 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:05:39.025 Initialization complete. Launching workers. 
00:05:39.025 ======================================================== 00:05:39.025 Latency(us) 00:05:39.025 Device Information : IOPS MiB/s Average min max 00:05:39.025 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14931.68 58.33 4286.02 874.38 15222.30 00:05:39.025 ======================================================== 00:05:39.025 Total : 14931.68 58.33 4286.02 874.38 15222.30 00:05:39.025 00:05:39.284 22:29:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:05:39.284 22:29:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:05:39.285 22:29:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:05:39.285 22:29:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:05:39.285 22:29:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:05:39.285 22:29:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:05:39.285 22:29:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:05:39.285 22:29:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:05:39.285 rmmod nvme_tcp 00:05:39.285 rmmod nvme_fabrics 00:05:39.285 rmmod nvme_keyring 00:05:39.285 22:29:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:05:39.285 22:29:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:05:39.285 22:29:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:05:39.285 22:29:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1149358 ']' 00:05:39.285 22:29:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1149358 00:05:39.285 22:29:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@942 -- # '[' -z 1149358 ']' 00:05:39.285 22:29:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # kill -0 1149358 00:05:39.285 22:29:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@947 -- # uname 00:05:39.285 22:29:22 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:39.285 22:29:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1149358 00:05:39.285 22:29:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # process_name=nvmf 00:05:39.285 22:29:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # '[' nvmf = sudo ']' 00:05:39.285 22:29:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1149358' 00:05:39.285 killing process with pid 1149358 00:05:39.285 22:29:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@961 -- # kill 1149358 00:05:39.285 22:29:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # wait 1149358 00:05:39.543 nvmf threads initialize successfully 00:05:39.543 bdev subsystem init successfully 00:05:39.543 created a nvmf target service 00:05:39.543 create targets's poll groups done 00:05:39.543 all subsystems of target started 00:05:39.543 nvmf target is running 00:05:39.543 all subsystems of target stopped 00:05:39.543 destroy targets's poll groups done 00:05:39.543 destroyed the nvmf target service 00:05:39.543 bdev subsystem finish successfully 00:05:39.543 nvmf threads destroy successfully 00:05:39.543 22:29:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:05:39.543 22:29:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:05:39.543 22:29:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:05:39.543 22:29:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:05:39.543 22:29:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:05:39.543 22:29:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:39.543 22:29:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:39.543 22:29:22 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:41.449 22:29:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:05:41.449 22:29:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:05:41.449 22:29:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:41.449 22:29:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:41.449 00:05:41.449 real 0m16.032s 00:05:41.449 user 0m45.684s 00:05:41.449 sys 0m3.233s 00:05:41.449 22:29:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:41.710 22:29:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:41.710 ************************************ 00:05:41.710 END TEST nvmf_example 00:05:41.710 ************************************ 00:05:41.710 22:29:24 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:05:41.710 22:29:24 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:05:41.710 22:29:24 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:05:41.710 22:29:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:41.710 22:29:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:41.710 ************************************ 00:05:41.710 START TEST nvmf_filesystem 00:05:41.710 ************************************ 00:05:41.710 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:05:41.710 * Looking for test storage... 
00:05:41.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:41.710 22:29:25 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:05:41.710 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:05:41.710 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:05:41.710 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:05:41.710 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:05:41.710 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:05:41.710 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:05:41.710 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:05:41.710 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:05:41.710 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:41.710 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:41.710 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:41.710 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:41.710 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:05:41.710 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:41.710 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:41.710 22:29:25 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:41.710 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:41.710 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:41.710 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:41.710 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:41.710 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:41.710 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:41.710 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:41.710 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:41.710 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:05:41.710 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:41.710 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:41.710 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:05:41.710 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:05:41.710 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:05:41.710 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 
00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 
00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:05:41.711 #define SPDK_CONFIG_H 00:05:41.711 
#define SPDK_CONFIG_APPS 1 00:05:41.711 #define SPDK_CONFIG_ARCH native 00:05:41.711 #undef SPDK_CONFIG_ASAN 00:05:41.711 #undef SPDK_CONFIG_AVAHI 00:05:41.711 #undef SPDK_CONFIG_CET 00:05:41.711 #define SPDK_CONFIG_COVERAGE 1 00:05:41.711 #define SPDK_CONFIG_CROSS_PREFIX 00:05:41.711 #undef SPDK_CONFIG_CRYPTO 00:05:41.711 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:41.711 #undef SPDK_CONFIG_CUSTOMOCF 00:05:41.711 #undef SPDK_CONFIG_DAOS 00:05:41.711 #define SPDK_CONFIG_DAOS_DIR 00:05:41.711 #define SPDK_CONFIG_DEBUG 1 00:05:41.711 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:41.711 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:41.711 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:41.711 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:41.711 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:41.711 #undef SPDK_CONFIG_DPDK_UADK 00:05:41.711 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:41.711 #define SPDK_CONFIG_EXAMPLES 1 00:05:41.711 #undef SPDK_CONFIG_FC 00:05:41.711 #define SPDK_CONFIG_FC_PATH 00:05:41.711 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:41.711 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:41.711 #undef SPDK_CONFIG_FUSE 00:05:41.711 #undef SPDK_CONFIG_FUZZER 00:05:41.711 #define SPDK_CONFIG_FUZZER_LIB 00:05:41.711 #undef SPDK_CONFIG_GOLANG 00:05:41.711 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:05:41.711 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:05:41.711 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:41.711 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:05:41.711 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:41.711 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:41.711 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:41.711 #define SPDK_CONFIG_IDXD 1 00:05:41.711 #define SPDK_CONFIG_IDXD_KERNEL 1 00:05:41.711 #undef SPDK_CONFIG_IPSEC_MB 00:05:41.711 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:41.711 #define SPDK_CONFIG_ISAL 1 00:05:41.711 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:41.711 #define SPDK_CONFIG_ISCSI_INITIATOR 1 
00:05:41.711 #define SPDK_CONFIG_LIBDIR 00:05:41.711 #undef SPDK_CONFIG_LTO 00:05:41.711 #define SPDK_CONFIG_MAX_LCORES 128 00:05:41.711 #define SPDK_CONFIG_NVME_CUSE 1 00:05:41.711 #undef SPDK_CONFIG_OCF 00:05:41.711 #define SPDK_CONFIG_OCF_PATH 00:05:41.711 #define SPDK_CONFIG_OPENSSL_PATH 00:05:41.711 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:41.711 #define SPDK_CONFIG_PGO_DIR 00:05:41.711 #undef SPDK_CONFIG_PGO_USE 00:05:41.711 #define SPDK_CONFIG_PREFIX /usr/local 00:05:41.711 #undef SPDK_CONFIG_RAID5F 00:05:41.711 #undef SPDK_CONFIG_RBD 00:05:41.711 #define SPDK_CONFIG_RDMA 1 00:05:41.711 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:41.711 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:41.711 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:41.711 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:41.711 #define SPDK_CONFIG_SHARED 1 00:05:41.711 #undef SPDK_CONFIG_SMA 00:05:41.711 #define SPDK_CONFIG_TESTS 1 00:05:41.711 #undef SPDK_CONFIG_TSAN 00:05:41.711 #define SPDK_CONFIG_UBLK 1 00:05:41.711 #define SPDK_CONFIG_UBSAN 1 00:05:41.711 #undef SPDK_CONFIG_UNIT_TESTS 00:05:41.711 #undef SPDK_CONFIG_URING 00:05:41.711 #define SPDK_CONFIG_URING_PATH 00:05:41.711 #undef SPDK_CONFIG_URING_ZNS 00:05:41.711 #undef SPDK_CONFIG_USDT 00:05:41.711 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:41.711 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:41.711 #define SPDK_CONFIG_VFIO_USER 1 00:05:41.711 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:41.711 #define SPDK_CONFIG_VHOST 1 00:05:41.711 #define SPDK_CONFIG_VIRTIO 1 00:05:41.711 #undef SPDK_CONFIG_VTUNE 00:05:41.711 #define SPDK_CONFIG_VTUNE_DIR 00:05:41.711 #define SPDK_CONFIG_WERROR 1 00:05:41.711 #define SPDK_CONFIG_WPDK_DIR 00:05:41.711 #undef SPDK_CONFIG_XNVME 00:05:41.711 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.711 22:29:25 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 
00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:05:41.712 22:29:25 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:05:41.712 22:29:25 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:05:41.712 22:29:25 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:05:41.712 22:29:25 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:05:41.712 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:05:41.713 22:29:25 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export 
DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export 
LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']'
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']'
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind=
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind=
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']'
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@273 -- # MAKE=make
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@274 -- # MAKEFLAGS=-j48
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@290 -- # export HUGEMEM=4096
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@290 -- # HUGEMEM=4096
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@292 -- # NO_HUGE=()
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@293 -- # TEST_MODE=
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@294 -- # for i in "$@"
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # case "$i" in
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # TEST_TRANSPORT=tcp
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@312 -- # [[ -z 1151072 ]]
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@312 -- # kill -0 1151072
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1674 -- # set_test_storage 2147483648
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@322 -- # [[ -v testdir ]]
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@324 -- # local requested_size=2147483648
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@325 -- # local mount target_dir
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # local -A mounts fss sizes avails uses
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # local source fs size avail mount use
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local storage_fallback storage_candidates
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # mktemp -udt spdk.XXXXXX
00:05:41.713 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # storage_fallback=/tmp/spdk.10PhEi
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -n '' ]]
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]]
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.10PhEi/tests/target /tmp/spdk.10PhEi
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@352 -- # requested_size=2214592512
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@321 -- # df -T
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@321 -- # grep -v Filesystem
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mounts["$mount"]=spdk_devtmpfs
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=devtmpfs
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=67108864
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=67108864
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # uses["$mount"]=0
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mounts["$mount"]=/dev/pmem0
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=ext2
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=953643008
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=5284429824
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # uses["$mount"]=4330786816
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mounts["$mount"]=spdk_root
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=overlay
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=55507132416
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=61994692608
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # uses["$mount"]=6487560192
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mounts["$mount"]=tmpfs
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=tmpfs
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=30941708288
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=30997344256
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # uses["$mount"]=55635968
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mounts["$mount"]=tmpfs
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=tmpfs
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=12390178816
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=12398940160
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # uses["$mount"]=8761344
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mounts["$mount"]=tmpfs
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=tmpfs
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=30996066304
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=30997348352
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # uses["$mount"]=1282048
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mounts["$mount"]=tmpfs
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=tmpfs
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=6199463936
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=6199468032
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # uses["$mount"]=4096
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # printf '* Looking for test storage...\n'
00:05:41.714 * Looking for test storage...
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # local target_space new_size
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # for target_dir in "${storage_candidates[@]}"
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # awk '$1 !~ /Filesystem/{print $6}'
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # mount=/
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # target_space=55507132416
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # (( target_space == 0 || target_space < requested_size ))
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # (( target_space >= requested_size ))
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # [[ overlay == tmpfs ]]
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # [[ overlay == ramfs ]]
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # [[ / == / ]]
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # new_size=8702152704
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@376 -- # (( new_size * 100 / sizes[/] > 95 ))
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:05:41.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@383 -- # return 0
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set -o errtrace
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1677 -- # shopt -s extdebug
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1681 -- # true
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # xtrace_fd
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]]
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]]
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:41.714 22:29:25 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:41.715 22:29:25 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:41.715 22:29:25 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:41.715 22:29:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:41.715 22:29:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:41.715 22:29:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:41.715 22:29:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH
00:05:41.715 22:29:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:41.715 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0
00:05:41.715 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:05:41.715 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:05:41.715 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:41.715 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:41.715 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:41.715 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:05:41.715 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:05:41.715 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0
00:05:41.715 22:29:25 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512
00:05:41.715 22:29:25 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:05:41.715 22:29:25 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit
00:05:41.715 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:05:41.715 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:05:41.715 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs
00:05:41.715 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no
00:05:41.715 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns
00:05:41.715 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:05:41.715 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:05:41.715 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:05:41.715 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:05:41.715 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:05:41.715 22:29:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable
00:05:41.715 22:29:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:05:44.305 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:05:44.305 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=()
00:05:44.305 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs
00:05:44.305 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=()
00:05:44.305 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:05:44.305 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=()
00:05:44.305 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers
00:05:44.305 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=()
00:05:44.305 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs
00:05:44.305 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=()
00:05:44.305 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810
00:05:44.305 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=()
00:05:44.305 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722
00:05:44.305 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=()
00:05:44.305 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:05:44.306 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:05:44.306 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]]
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:05:44.306 Found net devices under 0000:0a:00.0: cvl_0_0
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]]
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:05:44.306 Found net devices under 0000:0a:00.1: cvl_0_1
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:05:44.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:05:44.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms
00:05:44.306
00:05:44.306 --- 10.0.0.2 ping statistics ---
00:05:44.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:44.306 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:05:44.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:05:44.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms
00:05:44.306
00:05:44.306 --- 10.0.0.1 ping statistics ---
00:05:44.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:44.306 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']'
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # xtrace_disable
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:05:44.306 ************************************
00:05:44.306 START TEST nvmf_filesystem_no_in_capsule
00:05:44.306 ************************************
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1117 -- # nvmf_filesystem_part 0
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@716 -- # xtrace_disable
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1152705
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1152705
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@823 -- # '[' -z 1152705 ']'
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@828 -- # local max_retries=100
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:44.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # xtrace_disable
00:05:44.306 22:29:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:05:44.306 [2024-07-15 22:29:27.485779] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization...
00:05:44.306 [2024-07-15 22:29:27.485867] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:05:44.306 [2024-07-15 22:29:27.555980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:44.306 [2024-07-15 22:29:27.681345] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:05:44.306 [2024-07-15 22:29:27.681427] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:05:44.306 [2024-07-15 22:29:27.681452] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:44.306 [2024-07-15 22:29:27.681465] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:44.306 [2024-07-15 22:29:27.681482] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:05:44.306 [2024-07-15 22:29:27.681576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:44.306 [2024-07-15 22:29:27.681631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:05:44.306 [2024-07-15 22:29:27.681700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:05:44.306 [2024-07-15 22:29:27.681702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # return 0
00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable
00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:05:45.238 [2024-07-15 22:29:28.459083] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule --
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:45.238 Malloc1 00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:45.238 22:29:28 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:45.238 [2024-07-15 22:29:28.645833] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1372 -- # local bdev_name=Malloc1 00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1373 -- # local bdev_info 00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bs 00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local nb 00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # bdev_info='[ 00:05:45.238 { 00:05:45.238 "name": "Malloc1", 00:05:45.238 "aliases": [ 00:05:45.238 "77344e5c-6f16-405f-85b8-af48e6982785" 00:05:45.238 ], 00:05:45.238 "product_name": "Malloc disk", 
00:05:45.238 "block_size": 512, 00:05:45.238 "num_blocks": 1048576, 00:05:45.238 "uuid": "77344e5c-6f16-405f-85b8-af48e6982785", 00:05:45.238 "assigned_rate_limits": { 00:05:45.238 "rw_ios_per_sec": 0, 00:05:45.238 "rw_mbytes_per_sec": 0, 00:05:45.238 "r_mbytes_per_sec": 0, 00:05:45.238 "w_mbytes_per_sec": 0 00:05:45.238 }, 00:05:45.238 "claimed": true, 00:05:45.238 "claim_type": "exclusive_write", 00:05:45.238 "zoned": false, 00:05:45.238 "supported_io_types": { 00:05:45.238 "read": true, 00:05:45.238 "write": true, 00:05:45.238 "unmap": true, 00:05:45.238 "flush": true, 00:05:45.238 "reset": true, 00:05:45.238 "nvme_admin": false, 00:05:45.238 "nvme_io": false, 00:05:45.238 "nvme_io_md": false, 00:05:45.238 "write_zeroes": true, 00:05:45.238 "zcopy": true, 00:05:45.238 "get_zone_info": false, 00:05:45.238 "zone_management": false, 00:05:45.238 "zone_append": false, 00:05:45.238 "compare": false, 00:05:45.238 "compare_and_write": false, 00:05:45.238 "abort": true, 00:05:45.238 "seek_hole": false, 00:05:45.238 "seek_data": false, 00:05:45.238 "copy": true, 00:05:45.238 "nvme_iov_md": false 00:05:45.238 }, 00:05:45.238 "memory_domains": [ 00:05:45.238 { 00:05:45.238 "dma_device_id": "system", 00:05:45.238 "dma_device_type": 1 00:05:45.238 }, 00:05:45.238 { 00:05:45.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.238 "dma_device_type": 2 00:05:45.238 } 00:05:45.238 ], 00:05:45.238 "driver_specific": {} 00:05:45.238 } 00:05:45.238 ]' 00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # jq '.[] .block_size' 00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # bs=512 00:05:45.238 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # jq '.[] .num_blocks' 00:05:45.496 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # nb=1048576 00:05:45.497 
22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bdev_size=512 00:05:45.497 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # echo 512 00:05:45.497 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:05:45.497 22:29:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:05:46.064 22:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:05:46.064 22:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1192 -- # local i=0 00:05:46.064 22:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:05:46.064 22:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:05:46.064 22:29:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # sleep 2 00:05:47.963 22:29:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:05:47.963 22:29:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:05:47.963 22:29:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:05:47.963 22:29:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:05:47.963 22:29:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:05:47.963 22:29:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # return 0 00:05:47.963 22:29:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:05:47.963 22:29:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:05:47.963 22:29:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:05:47.963 22:29:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:05:47.963 22:29:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:47.963 22:29:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:47.963 22:29:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:05:47.963 22:29:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:05:47.963 22:29:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:05:47.963 22:29:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:05:47.963 22:29:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:05:48.221 22:29:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:05:49.189 22:29:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:05:50.124 22:29:33 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:05:50.124 22:29:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:05:50.124 22:29:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:05:50.124 22:29:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:50.124 22:29:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:50.383 ************************************ 00:05:50.383 START TEST filesystem_ext4 00:05:50.383 ************************************ 00:05:50.383 22:29:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1117 -- # nvmf_filesystem_create ext4 nvme0n1 00:05:50.383 22:29:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:05:50.383 22:29:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:05:50.383 22:29:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:05:50.383 22:29:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@918 -- # local fstype=ext4 00:05:50.383 22:29:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@919 -- # local dev_name=/dev/nvme0n1p1 00:05:50.383 22:29:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@920 -- # local i=0 00:05:50.383 22:29:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@921 -- # local force 
00:05:50.383 22:29:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # '[' ext4 = ext4 ']' 00:05:50.383 22:29:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # force=-F 00:05:50.383 22:29:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:05:50.383 mke2fs 1.46.5 (30-Dec-2021) 00:05:50.383 Discarding device blocks: 0/522240 done 00:05:50.383 Creating filesystem with 522240 1k blocks and 130560 inodes 00:05:50.383 Filesystem UUID: 23f96767-ba36-4a8e-95ba-7f3ce203209a 00:05:50.383 Superblock backups stored on blocks: 00:05:50.383 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:05:50.383 00:05:50.383 Allocating group tables: 0/64 done 00:05:50.383 Writing inode tables: 0/64 done 00:05:50.643 Creating journal (8192 blocks): done 00:05:51.582 Writing superblocks and filesystem accounting information: 0/64 done 00:05:51.582 00:05:51.582 22:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # return 0 00:05:51.582 22:29:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:05:52.151 22:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:05:52.151 22:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:05:52.151 22:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:05:52.151 22:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:05:52.151 22:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- 
target/filesystem.sh@29 -- # i=0 00:05:52.151 22:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:05:52.410 22:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1152705 00:05:52.410 22:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:05:52.410 22:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:05:52.410 22:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:05:52.410 22:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:05:52.410 00:05:52.410 real 0m2.025s 00:05:52.410 user 0m0.016s 00:05:52.410 sys 0m0.056s 00:05:52.410 22:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:52.410 22:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:05:52.410 ************************************ 00:05:52.410 END TEST filesystem_ext4 00:05:52.410 ************************************ 00:05:52.410 22:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1136 -- # return 0 00:05:52.410 22:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:05:52.410 22:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:05:52.410 22:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:52.410 22:29:35 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:52.410 ************************************ 00:05:52.410 START TEST filesystem_btrfs 00:05:52.410 ************************************ 00:05:52.410 22:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1117 -- # nvmf_filesystem_create btrfs nvme0n1 00:05:52.410 22:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:05:52.410 22:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:05:52.410 22:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:05:52.410 22:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@918 -- # local fstype=btrfs 00:05:52.410 22:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@919 -- # local dev_name=/dev/nvme0n1p1 00:05:52.410 22:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@920 -- # local i=0 00:05:52.410 22:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@921 -- # local force 00:05:52.410 22:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # '[' btrfs = ext4 ']' 00:05:52.410 22:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # force=-f 00:05:52.410 22:29:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:05:52.670 btrfs-progs v6.6.2 00:05:52.670 See https://btrfs.readthedocs.io for more 
information. 00:05:52.670 00:05:52.670 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:05:52.670 NOTE: several default settings have changed in version 5.15, please make sure 00:05:52.670 this does not affect your deployments: 00:05:52.670 - DUP for metadata (-m dup) 00:05:52.670 - enabled no-holes (-O no-holes) 00:05:52.670 - enabled free-space-tree (-R free-space-tree) 00:05:52.670 00:05:52.670 Label: (null) 00:05:52.670 UUID: 2d2a467e-c736-4e4e-8a12-4b3306faae86 00:05:52.670 Node size: 16384 00:05:52.670 Sector size: 4096 00:05:52.670 Filesystem size: 510.00MiB 00:05:52.670 Block group profiles: 00:05:52.670 Data: single 8.00MiB 00:05:52.670 Metadata: DUP 32.00MiB 00:05:52.670 System: DUP 8.00MiB 00:05:52.670 SSD detected: yes 00:05:52.670 Zoned device: no 00:05:52.670 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:05:52.670 Runtime features: free-space-tree 00:05:52.670 Checksum: crc32c 00:05:52.670 Number of devices: 1 00:05:52.670 Devices: 00:05:52.670 ID SIZE PATH 00:05:52.670 1 510.00MiB /dev/nvme0n1p1 00:05:52.670 00:05:52.670 22:29:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # return 0 00:05:52.670 22:29:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:05:53.605 22:29:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:05:53.605 22:29:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:05:53.605 22:29:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:05:53.605 22:29:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:05:53.605 22:29:36 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:05:53.605 22:29:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:05:53.605 22:29:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1152705 00:05:53.605 22:29:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:05:53.605 22:29:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:05:53.605 22:29:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:05:53.605 22:29:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:05:53.605 00:05:53.605 real 0m1.210s 00:05:53.605 user 0m0.028s 00:05:53.605 sys 0m0.104s 00:05:53.605 22:29:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:53.605 22:29:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:05:53.605 ************************************ 00:05:53.605 END TEST filesystem_btrfs 00:05:53.605 ************************************ 00:05:53.606 22:29:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1136 -- # return 0 00:05:53.606 22:29:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:05:53.606 22:29:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:05:53.606 22:29:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1099 -- # xtrace_disable 00:05:53.606 22:29:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:53.606 ************************************ 00:05:53.606 START TEST filesystem_xfs 00:05:53.606 ************************************ 00:05:53.606 22:29:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1117 -- # nvmf_filesystem_create xfs nvme0n1 00:05:53.606 22:29:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:05:53.606 22:29:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:05:53.606 22:29:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:05:53.606 22:29:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@918 -- # local fstype=xfs 00:05:53.606 22:29:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@919 -- # local dev_name=/dev/nvme0n1p1 00:05:53.606 22:29:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@920 -- # local i=0 00:05:53.606 22:29:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@921 -- # local force 00:05:53.606 22:29:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # '[' xfs = ext4 ']' 00:05:53.606 22:29:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # force=-f 00:05:53.606 22:29:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # mkfs.xfs -f /dev/nvme0n1p1 00:05:53.606 meta-data=/dev/nvme0n1p1 isize=512 
agcount=4, agsize=32640 blks 00:05:53.606 = sectsz=512 attr=2, projid32bit=1 00:05:53.606 = crc=1 finobt=1, sparse=1, rmapbt=0 00:05:53.606 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:05:53.606 data = bsize=4096 blocks=130560, imaxpct=25 00:05:53.606 = sunit=0 swidth=0 blks 00:05:53.606 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:05:53.606 log =internal log bsize=4096 blocks=16384, version=2 00:05:53.606 = sectsz=512 sunit=0 blks, lazy-count=1 00:05:53.606 realtime =none extsz=4096 blocks=0, rtextents=0 00:05:54.980 Discarding blocks...Done. 00:05:54.980 22:29:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # return 0 00:05:54.980 22:29:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:05:56.882 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:05:56.882 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:05:56.882 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:05:56.882 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:05:56.882 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:05:56.882 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:05:57.140 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1152705 00:05:57.140 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:05:57.140 22:29:40 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:05:57.140 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:05:57.140 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:05:57.140 00:05:57.140 real 0m3.447s 00:05:57.140 user 0m0.021s 00:05:57.140 sys 0m0.062s 00:05:57.140 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:57.140 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:05:57.140 ************************************ 00:05:57.140 END TEST filesystem_xfs 00:05:57.140 ************************************ 00:05:57.140 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1136 -- # return 0 00:05:57.140 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:05:57.140 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:05:57.140 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:05:57.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:05:57.140 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:05:57.140 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1213 -- # local i=0 00:05:57.140 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:05:57.140 22:29:40 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:05:57.140 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:05:57.140 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:05:57.140 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1225 -- # return 0 00:05:57.140 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:05:57.140 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:57.140 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:57.140 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:57.140 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:05:57.140 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1152705 00:05:57.140 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@942 -- # '[' -z 1152705 ']' 00:05:57.140 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # kill -0 1152705 00:05:57.140 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@947 -- # uname 00:05:57.140 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:57.140 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # ps 
--no-headers -o comm= 1152705 00:05:57.399 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:57.399 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:57.399 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1152705' 00:05:57.399 killing process with pid 1152705 00:05:57.399 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@961 -- # kill 1152705 00:05:57.399 22:29:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # wait 1152705 00:05:57.657 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:05:57.657 00:05:57.657 real 0m13.715s 00:05:57.657 user 0m52.756s 00:05:57.657 sys 0m1.973s 00:05:57.657 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:57.657 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:57.657 ************************************ 00:05:57.657 END TEST nvmf_filesystem_no_in_capsule 00:05:57.657 ************************************ 00:05:57.916 22:29:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1136 -- # return 0 00:05:57.916 22:29:41 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:05:57.916 22:29:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:05:57.916 22:29:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:57.916 22:29:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.916 ************************************ 00:05:57.916 START TEST 
nvmf_filesystem_in_capsule 00:05:57.916 ************************************ 00:05:57.916 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1117 -- # nvmf_filesystem_part 4096 00:05:57.916 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:05:57.916 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:05:57.916 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:05:57.916 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:57.916 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:57.916 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1154588 00:05:57.917 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:05:57.917 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1154588 00:05:57.917 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@823 -- # '[' -z 1154588 ']' 00:05:57.917 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.917 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:57.917 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:57.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.917 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:57.917 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:57.917 [2024-07-15 22:29:41.242145] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:05:57.917 [2024-07-15 22:29:41.242242] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:57.917 [2024-07-15 22:29:41.305253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:57.917 [2024-07-15 22:29:41.414862] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:57.917 [2024-07-15 22:29:41.414952] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:57.917 [2024-07-15 22:29:41.414980] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:57.917 [2024-07-15 22:29:41.414992] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:57.917 [2024-07-15 22:29:41.415001] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:57.917 [2024-07-15 22:29:41.415055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.917 [2024-07-15 22:29:41.415114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.917 [2024-07-15 22:29:41.415181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:57.917 [2024-07-15 22:29:41.415186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.176 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:58.176 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # return 0 00:05:58.176 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:05:58.176 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:58.176 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:58.176 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:58.176 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:05:58.176 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:05:58.176 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:58.176 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:58.176 [2024-07-15 22:29:41.572805] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:58.176 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 
00:05:58.176 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:05:58.176 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:58.176 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:58.436 Malloc1 00:05:58.436 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:58.436 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:05:58.436 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:58.436 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:58.436 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:58.436 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:05:58.436 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:58.436 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:58.436 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:58.436 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:58.436 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:58.436 22:29:41 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:58.436 [2024-07-15 22:29:41.758329] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:58.436 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:58.436 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:05:58.436 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1372 -- # local bdev_name=Malloc1 00:05:58.436 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1373 -- # local bdev_info 00:05:58.436 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bs 00:05:58.436 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local nb 00:05:58.436 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:05:58.436 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:58.436 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:58.436 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:58.436 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # bdev_info='[ 00:05:58.436 { 00:05:58.436 "name": "Malloc1", 00:05:58.436 "aliases": [ 00:05:58.436 "a328d22b-13da-4c44-aeb7-a3e0f4ad4f02" 00:05:58.436 ], 00:05:58.436 "product_name": "Malloc disk", 00:05:58.436 "block_size": 512, 00:05:58.436 "num_blocks": 1048576, 00:05:58.436 "uuid": "a328d22b-13da-4c44-aeb7-a3e0f4ad4f02", 00:05:58.436 "assigned_rate_limits": { 
00:05:58.436 "rw_ios_per_sec": 0, 00:05:58.436 "rw_mbytes_per_sec": 0, 00:05:58.436 "r_mbytes_per_sec": 0, 00:05:58.436 "w_mbytes_per_sec": 0 00:05:58.436 }, 00:05:58.436 "claimed": true, 00:05:58.436 "claim_type": "exclusive_write", 00:05:58.436 "zoned": false, 00:05:58.436 "supported_io_types": { 00:05:58.436 "read": true, 00:05:58.436 "write": true, 00:05:58.436 "unmap": true, 00:05:58.436 "flush": true, 00:05:58.436 "reset": true, 00:05:58.436 "nvme_admin": false, 00:05:58.436 "nvme_io": false, 00:05:58.436 "nvme_io_md": false, 00:05:58.436 "write_zeroes": true, 00:05:58.436 "zcopy": true, 00:05:58.436 "get_zone_info": false, 00:05:58.436 "zone_management": false, 00:05:58.436 "zone_append": false, 00:05:58.436 "compare": false, 00:05:58.436 "compare_and_write": false, 00:05:58.436 "abort": true, 00:05:58.436 "seek_hole": false, 00:05:58.436 "seek_data": false, 00:05:58.436 "copy": true, 00:05:58.436 "nvme_iov_md": false 00:05:58.436 }, 00:05:58.436 "memory_domains": [ 00:05:58.436 { 00:05:58.436 "dma_device_id": "system", 00:05:58.436 "dma_device_type": 1 00:05:58.436 }, 00:05:58.436 { 00:05:58.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.436 "dma_device_type": 2 00:05:58.436 } 00:05:58.436 ], 00:05:58.436 "driver_specific": {} 00:05:58.436 } 00:05:58.437 ]' 00:05:58.437 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # jq '.[] .block_size' 00:05:58.437 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # bs=512 00:05:58.437 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # jq '.[] .num_blocks' 00:05:58.437 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # nb=1048576 00:05:58.437 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bdev_size=512 00:05:58.437 22:29:41 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # echo 512 00:05:58.437 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:05:58.437 22:29:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:05:59.050 22:29:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:05:59.051 22:29:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1192 -- # local i=0 00:05:59.051 22:29:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:05:59.051 22:29:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:05:59.051 22:29:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # sleep 2 00:06:01.588 22:29:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:06:01.588 22:29:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:06:01.588 22:29:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:06:01.588 22:29:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:06:01.588 22:29:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:06:01.588 22:29:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # 
return 0 00:06:01.588 22:29:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:01.588 22:29:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:01.588 22:29:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:01.588 22:29:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:01.588 22:29:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:01.588 22:29:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:01.588 22:29:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:01.588 22:29:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:01.588 22:29:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:01.588 22:29:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:01.588 22:29:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:01.588 22:29:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:02.156 22:29:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:03.092 22:29:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:03.092 22:29:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 
00:06:03.092 22:29:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:06:03.092 22:29:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:03.092 22:29:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:03.092 ************************************ 00:06:03.092 START TEST filesystem_in_capsule_ext4 00:06:03.092 ************************************ 00:06:03.092 22:29:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1117 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:03.092 22:29:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:03.092 22:29:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:03.092 22:29:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:03.092 22:29:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@918 -- # local fstype=ext4 00:06:03.092 22:29:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@919 -- # local dev_name=/dev/nvme0n1p1 00:06:03.092 22:29:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@920 -- # local i=0 00:06:03.092 22:29:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@921 -- # local force 00:06:03.092 22:29:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # '[' ext4 = ext4 ']' 00:06:03.092 22:29:46 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # force=-F 00:06:03.092 22:29:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:03.092 mke2fs 1.46.5 (30-Dec-2021) 00:06:03.349 Discarding device blocks: 0/522240 done 00:06:03.349 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:03.349 Filesystem UUID: b53c1736-160b-4bf2-9558-20a99aa2df16 00:06:03.349 Superblock backups stored on blocks: 00:06:03.349 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:03.349 00:06:03.349 Allocating group tables: 0/64 done 00:06:03.349 Writing inode tables: 0/64 done 00:06:06.638 Creating journal (8192 blocks): done 00:06:06.638 Writing superblocks and filesystem accounting information: 0/64 done 00:06:06.638 00:06:06.638 22:29:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # return 0 00:06:06.638 22:29:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:06.638 22:29:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:06.638 22:29:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:06.638 22:29:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:06.638 22:29:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:06.638 22:29:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:06.638 22:29:50 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:06.638 22:29:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1154588 00:06:06.638 22:29:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:06.638 22:29:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:06.638 22:29:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:06.638 22:29:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:06.638 00:06:06.638 real 0m3.501s 00:06:06.638 user 0m0.015s 00:06:06.638 sys 0m0.065s 00:06:06.638 22:29:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:06.638 22:29:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:06.638 ************************************ 00:06:06.638 END TEST filesystem_in_capsule_ext4 00:06:06.638 ************************************ 00:06:06.638 22:29:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1136 -- # return 0 00:06:06.638 22:29:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:06.638 22:29:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:06:06.638 22:29:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:06.638 
22:29:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:06.638 ************************************ 00:06:06.638 START TEST filesystem_in_capsule_btrfs 00:06:06.638 ************************************ 00:06:06.638 22:29:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1117 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:06.638 22:29:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:06.638 22:29:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:06.638 22:29:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:06.638 22:29:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@918 -- # local fstype=btrfs 00:06:06.638 22:29:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@919 -- # local dev_name=/dev/nvme0n1p1 00:06:06.638 22:29:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@920 -- # local i=0 00:06:06.638 22:29:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@921 -- # local force 00:06:06.638 22:29:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # '[' btrfs = ext4 ']' 00:06:06.638 22:29:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # force=-f 00:06:06.638 22:29:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # mkfs.btrfs -f 
/dev/nvme0n1p1 00:06:07.203 btrfs-progs v6.6.2 00:06:07.203 See https://btrfs.readthedocs.io for more information. 00:06:07.203 00:06:07.203 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:06:07.203 NOTE: several default settings have changed in version 5.15, please make sure 00:06:07.203 this does not affect your deployments: 00:06:07.203 - DUP for metadata (-m dup) 00:06:07.203 - enabled no-holes (-O no-holes) 00:06:07.203 - enabled free-space-tree (-R free-space-tree) 00:06:07.203 00:06:07.203 Label: (null) 00:06:07.203 UUID: 3edb4b98-e2c3-41f0-8970-7c58c21b0de4 00:06:07.203 Node size: 16384 00:06:07.203 Sector size: 4096 00:06:07.203 Filesystem size: 510.00MiB 00:06:07.203 Block group profiles: 00:06:07.203 Data: single 8.00MiB 00:06:07.203 Metadata: DUP 32.00MiB 00:06:07.203 System: DUP 8.00MiB 00:06:07.203 SSD detected: yes 00:06:07.203 Zoned device: no 00:06:07.203 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:07.203 Runtime features: free-space-tree 00:06:07.203 Checksum: crc32c 00:06:07.203 Number of devices: 1 00:06:07.203 Devices: 00:06:07.203 ID SIZE PATH 00:06:07.203 1 510.00MiB /dev/nvme0n1p1 00:06:07.203 00:06:07.204 22:29:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # return 0 00:06:07.204 22:29:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:08.142 22:29:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:08.142 22:29:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:08.142 22:29:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:08.142 22:29:51 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:08.142 22:29:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:08.142 22:29:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:08.142 22:29:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1154588 00:06:08.142 22:29:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:08.142 22:29:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:08.142 22:29:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:08.142 22:29:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:08.142 00:06:08.142 real 0m1.243s 00:06:08.142 user 0m0.022s 00:06:08.142 sys 0m0.114s 00:06:08.142 22:29:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:08.142 22:29:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:08.142 ************************************ 00:06:08.142 END TEST filesystem_in_capsule_btrfs 00:06:08.142 ************************************ 00:06:08.142 22:29:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1136 -- # return 0 00:06:08.142 22:29:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create 
xfs nvme0n1 00:06:08.142 22:29:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:06:08.142 22:29:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:08.142 22:29:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:08.142 ************************************ 00:06:08.142 START TEST filesystem_in_capsule_xfs 00:06:08.142 ************************************ 00:06:08.142 22:29:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1117 -- # nvmf_filesystem_create xfs nvme0n1 00:06:08.142 22:29:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:08.142 22:29:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:08.142 22:29:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:08.142 22:29:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@918 -- # local fstype=xfs 00:06:08.142 22:29:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@919 -- # local dev_name=/dev/nvme0n1p1 00:06:08.142 22:29:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@920 -- # local i=0 00:06:08.142 22:29:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@921 -- # local force 00:06:08.142 22:29:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # '[' xfs = ext4 ']' 00:06:08.142 22:29:51 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # force=-f 00:06:08.142 22:29:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:08.142 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:08.142 = sectsz=512 attr=2, projid32bit=1 00:06:08.142 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:08.142 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:08.142 data = bsize=4096 blocks=130560, imaxpct=25 00:06:08.142 = sunit=0 swidth=0 blks 00:06:08.142 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:08.142 log =internal log bsize=4096 blocks=16384, version=2 00:06:08.142 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:08.142 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:09.080 Discarding blocks...Done. 00:06:09.080 22:29:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # return 0 00:06:09.080 22:29:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:11.613 22:29:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:11.613 22:29:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:11.613 22:29:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:11.613 22:29:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:11.613 22:29:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:11.613 22:29:54 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:11.613 22:29:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1154588 00:06:11.613 22:29:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:11.613 22:29:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:11.613 22:29:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:11.613 22:29:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:11.613 00:06:11.613 real 0m3.443s 00:06:11.613 user 0m0.014s 00:06:11.613 sys 0m0.065s 00:06:11.613 22:29:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:11.613 22:29:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:11.613 ************************************ 00:06:11.613 END TEST filesystem_in_capsule_xfs 00:06:11.613 ************************************ 00:06:11.613 22:29:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1136 -- # return 0 00:06:11.613 22:29:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:11.613 22:29:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:11.613 22:29:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:11.869 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:06:11.869 22:29:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:11.869 22:29:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1213 -- # local i=0 00:06:11.869 22:29:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:06:11.869 22:29:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:11.869 22:29:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:06:11.869 22:29:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:11.869 22:29:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1225 -- # return 0 00:06:11.869 22:29:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:11.869 22:29:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:11.869 22:29:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:11.869 22:29:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:11.869 22:29:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:11.869 22:29:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1154588 00:06:11.870 22:29:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@942 -- # '[' -z 1154588 ']' 00:06:11.870 22:29:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@946 -- # kill -0 1154588 00:06:11.870 22:29:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@947 -- # uname 00:06:11.870 22:29:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:11.870 22:29:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1154588 00:06:11.870 22:29:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:11.870 22:29:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:11.870 22:29:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1154588' 00:06:11.870 killing process with pid 1154588 00:06:11.870 22:29:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@961 -- # kill 1154588 00:06:11.870 22:29:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # wait 1154588 00:06:12.435 22:29:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:12.435 00:06:12.435 real 0m14.551s 00:06:12.435 user 0m55.866s 00:06:12.435 sys 0m2.036s 00:06:12.435 22:29:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:12.435 22:29:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:12.435 ************************************ 00:06:12.435 END TEST nvmf_filesystem_in_capsule 00:06:12.435 ************************************ 00:06:12.435 22:29:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1136 -- # return 0 00:06:12.435 22:29:55 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:12.435 22:29:55 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:12.435 22:29:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:12.435 22:29:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:12.435 22:29:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:12.435 22:29:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:12.435 22:29:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:12.435 rmmod nvme_tcp 00:06:12.435 rmmod nvme_fabrics 00:06:12.435 rmmod nvme_keyring 00:06:12.435 22:29:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:12.435 22:29:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:12.435 22:29:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:12.435 22:29:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:12.435 22:29:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:12.436 22:29:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:12.436 22:29:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:12.436 22:29:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:12.436 22:29:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:12.436 22:29:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:12.436 22:29:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:12.436 22:29:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:14.964 22:29:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:14.964 00:06:14.964 real 0m32.861s 00:06:14.964 user 1m49.547s 00:06:14.964 sys 0m5.695s 00:06:14.964 22:29:57 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:14.964 22:29:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:14.964 ************************************ 00:06:14.964 END TEST nvmf_filesystem 00:06:14.964 ************************************ 00:06:14.964 22:29:57 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:06:14.964 22:29:57 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:14.964 22:29:57 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:06:14.964 22:29:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:14.964 22:29:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.964 ************************************ 00:06:14.964 START TEST nvmf_target_discovery 00:06:14.964 ************************************ 00:06:14.964 22:29:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:14.964 * Looking for test storage... 
00:06:14.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:14.964 22:29:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:14.964 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:14.964 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:14.964 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:14.964 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:14.964 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:14.964 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:14.964 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:14.964 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:14.964 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:14.964 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:14.964 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:14.964 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:14.964 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:14.964 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:14.964 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:14.964 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:06:14.964 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:14.965 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:14.965 22:29:57 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:14.965 22:29:57 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:14.965 22:29:57 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:14.965 22:29:57 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.965 22:29:57 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.965 22:29:57 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.965 22:29:57 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:14.965 22:29:57 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.965 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:14.965 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:14.965 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:14.965 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:14.965 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:14.965 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:14.965 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:14.965 22:29:57 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:14.965 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:14.965 22:29:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:14.965 22:29:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:14.965 22:29:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:14.965 22:29:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:14.965 22:29:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:14.965 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:14.965 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:14.965 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:14.965 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:14.965 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:14.965 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:14.965 22:29:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:14.965 22:29:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:14.965 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:14.965 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:14.965 22:29:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:06:14.965 22:29:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:16.867 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:16.867 
22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:16.867 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- 
# pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:16.867 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:16.867 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:16.867 22:29:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:16.867 22:30:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:16.867 22:30:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:16.867 22:30:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:16.867 22:30:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:16.867 22:30:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:16.867 22:30:00 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:16.867 22:30:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:16.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:16.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:06:16.867 00:06:16.867 --- 10.0.0.2 ping statistics --- 00:06:16.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:16.867 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:06:16.868 22:30:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:16.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:16.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:06:16.868 00:06:16.868 --- 10.0.0.1 ping statistics --- 00:06:16.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:16.868 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:06:16.868 22:30:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:16.868 22:30:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:06:16.868 22:30:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:16.868 22:30:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:16.868 22:30:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:16.868 22:30:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:16.868 22:30:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:16.868 22:30:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:16.868 22:30:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:16.868 22:30:00 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:16.868 22:30:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:16.868 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:16.868 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.868 22:30:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1158421 00:06:16.868 22:30:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:16.868 22:30:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1158421 00:06:16.868 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@823 -- # '[' -z 1158421 ']' 00:06:16.868 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.868 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:16.868 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.868 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:16.868 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.868 [2024-07-15 22:30:00.154932] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:06:16.868 [2024-07-15 22:30:00.155024] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:16.868 [2024-07-15 22:30:00.224976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:16.868 [2024-07-15 22:30:00.340959] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:16.868 [2024-07-15 22:30:00.341007] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:16.868 [2024-07-15 22:30:00.341027] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:16.868 [2024-07-15 22:30:00.341040] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:16.868 [2024-07-15 22:30:00.341050] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:16.868 [2024-07-15 22:30:00.341102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.868 [2024-07-15 22:30:00.341156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.868 [2024-07-15 22:30:00.341221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:16.868 [2024-07-15 22:30:00.341224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # return 0 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.130 [2024-07-15 22:30:00.503573] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:06:17.130 22:30:00 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.130 Null1 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.130 [2024-07-15 22:30:00.543946] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:17.130 22:30:00 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.130 Null2 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.130 Null3 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd 
bdev_null_create Null4 102400 512 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.130 Null4 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.130 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.434 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.434 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:17.434 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.434 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.434 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.434 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:17.434 22:30:00 nvmf_tcp.nvmf_target_discovery -- 
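The four-iteration setup loop traced above (bdev_null_create → nvmf_create_subsystem → nvmf_subsystem_add_ns → nvmf_subsystem_add_listener, repeated for Null1..Null4 / cnode1..cnode4) can be reproduced against a standalone target with the same RPCs the test drives through `rpc_cmd`. This is a sketch, not part of the captured run: it assumes a running `nvmf_tgt` and SPDK's `scripts/rpc.py` on PATH; the address, ports, and sizes mirror the values seen in this log.

```shell
# Sketch of the RPC sequence exercised by target/discovery.sh above.
# Assumes a live nvmf_tgt reachable by rpc.py (assumption, not from this log).
rpc.py nvmf_create_transport -t tcp -o -u 8192

for i in 1 2 3 4; do
    # Null bdev sized as in the log (bdev_null_create NullN 102400 512)
    rpc.py bdev_null_create "Null$i" 102400 512
    rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        -a -s "SPDK0000000000000$i"
    rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done

# Expose the discovery subsystem and a referral, as the trace does next
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
```

With this in place, `nvme discover` returns six discovery log records (one current discovery subsystem, four NVMe subsystems, one referral), matching the output captured below.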
common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.434 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.434 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.434 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:17.434 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.434 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.434 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.434 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:06:17.434 00:06:17.434 Discovery Log Number of Records 6, Generation counter 6 00:06:17.434 =====Discovery Log Entry 0====== 00:06:17.434 trtype: tcp 00:06:17.434 adrfam: ipv4 00:06:17.434 subtype: current discovery subsystem 00:06:17.434 treq: not required 00:06:17.434 portid: 0 00:06:17.434 trsvcid: 4420 00:06:17.434 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:17.434 traddr: 10.0.0.2 00:06:17.434 eflags: explicit discovery connections, duplicate discovery information 00:06:17.434 sectype: none 00:06:17.434 =====Discovery Log Entry 1====== 00:06:17.434 trtype: tcp 00:06:17.434 adrfam: ipv4 00:06:17.434 subtype: nvme subsystem 00:06:17.434 treq: not required 00:06:17.434 portid: 0 00:06:17.434 trsvcid: 4420 00:06:17.434 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:17.434 traddr: 10.0.0.2 00:06:17.434 eflags: none 00:06:17.434 sectype: none 00:06:17.434 =====Discovery Log Entry 2====== 00:06:17.434 trtype: tcp 00:06:17.434 adrfam: ipv4 00:06:17.434 subtype: nvme subsystem 00:06:17.434 treq: not required 00:06:17.434 portid: 
0 00:06:17.434 trsvcid: 4420 00:06:17.434 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:17.434 traddr: 10.0.0.2 00:06:17.434 eflags: none 00:06:17.434 sectype: none 00:06:17.434 =====Discovery Log Entry 3====== 00:06:17.434 trtype: tcp 00:06:17.434 adrfam: ipv4 00:06:17.434 subtype: nvme subsystem 00:06:17.434 treq: not required 00:06:17.434 portid: 0 00:06:17.434 trsvcid: 4420 00:06:17.434 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:17.434 traddr: 10.0.0.2 00:06:17.434 eflags: none 00:06:17.434 sectype: none 00:06:17.434 =====Discovery Log Entry 4====== 00:06:17.434 trtype: tcp 00:06:17.434 adrfam: ipv4 00:06:17.434 subtype: nvme subsystem 00:06:17.434 treq: not required 00:06:17.434 portid: 0 00:06:17.434 trsvcid: 4420 00:06:17.434 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:17.434 traddr: 10.0.0.2 00:06:17.434 eflags: none 00:06:17.434 sectype: none 00:06:17.434 =====Discovery Log Entry 5====== 00:06:17.434 trtype: tcp 00:06:17.434 adrfam: ipv4 00:06:17.434 subtype: discovery subsystem referral 00:06:17.434 treq: not required 00:06:17.434 portid: 0 00:06:17.434 trsvcid: 4430 00:06:17.434 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:17.434 traddr: 10.0.0.2 00:06:17.434 eflags: none 00:06:17.434 sectype: none 00:06:17.434 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:17.434 Perform nvmf subsystem discovery via RPC 00:06:17.434 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:17.434 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.434 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.434 [ 00:06:17.434 { 00:06:17.434 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:17.434 "subtype": "Discovery", 00:06:17.434 "listen_addresses": [ 00:06:17.434 { 00:06:17.434 "trtype": "TCP", 00:06:17.434 "adrfam": "IPv4", 00:06:17.434 "traddr": "10.0.0.2", 
00:06:17.434 "trsvcid": "4420" 00:06:17.434 } 00:06:17.434 ], 00:06:17.434 "allow_any_host": true, 00:06:17.434 "hosts": [] 00:06:17.434 }, 00:06:17.434 { 00:06:17.434 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:17.434 "subtype": "NVMe", 00:06:17.434 "listen_addresses": [ 00:06:17.434 { 00:06:17.434 "trtype": "TCP", 00:06:17.434 "adrfam": "IPv4", 00:06:17.434 "traddr": "10.0.0.2", 00:06:17.434 "trsvcid": "4420" 00:06:17.434 } 00:06:17.434 ], 00:06:17.434 "allow_any_host": true, 00:06:17.434 "hosts": [], 00:06:17.434 "serial_number": "SPDK00000000000001", 00:06:17.434 "model_number": "SPDK bdev Controller", 00:06:17.434 "max_namespaces": 32, 00:06:17.434 "min_cntlid": 1, 00:06:17.434 "max_cntlid": 65519, 00:06:17.434 "namespaces": [ 00:06:17.434 { 00:06:17.434 "nsid": 1, 00:06:17.434 "bdev_name": "Null1", 00:06:17.434 "name": "Null1", 00:06:17.434 "nguid": "7CFE7BB847AB46CAB5AF520F8A29A0EB", 00:06:17.434 "uuid": "7cfe7bb8-47ab-46ca-b5af-520f8a29a0eb" 00:06:17.434 } 00:06:17.434 ] 00:06:17.434 }, 00:06:17.434 { 00:06:17.434 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:17.434 "subtype": "NVMe", 00:06:17.434 "listen_addresses": [ 00:06:17.434 { 00:06:17.434 "trtype": "TCP", 00:06:17.434 "adrfam": "IPv4", 00:06:17.434 "traddr": "10.0.0.2", 00:06:17.434 "trsvcid": "4420" 00:06:17.434 } 00:06:17.434 ], 00:06:17.434 "allow_any_host": true, 00:06:17.434 "hosts": [], 00:06:17.434 "serial_number": "SPDK00000000000002", 00:06:17.434 "model_number": "SPDK bdev Controller", 00:06:17.434 "max_namespaces": 32, 00:06:17.434 "min_cntlid": 1, 00:06:17.434 "max_cntlid": 65519, 00:06:17.434 "namespaces": [ 00:06:17.434 { 00:06:17.434 "nsid": 1, 00:06:17.434 "bdev_name": "Null2", 00:06:17.434 "name": "Null2", 00:06:17.434 "nguid": "6ECEBC9E80CD4A5DA85382D5E5271209", 00:06:17.434 "uuid": "6ecebc9e-80cd-4a5d-a853-82d5e5271209" 00:06:17.434 } 00:06:17.434 ] 00:06:17.434 }, 00:06:17.434 { 00:06:17.434 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:17.434 "subtype": "NVMe", 00:06:17.434 
"listen_addresses": [ 00:06:17.434 { 00:06:17.434 "trtype": "TCP", 00:06:17.434 "adrfam": "IPv4", 00:06:17.434 "traddr": "10.0.0.2", 00:06:17.434 "trsvcid": "4420" 00:06:17.434 } 00:06:17.434 ], 00:06:17.434 "allow_any_host": true, 00:06:17.434 "hosts": [], 00:06:17.434 "serial_number": "SPDK00000000000003", 00:06:17.434 "model_number": "SPDK bdev Controller", 00:06:17.434 "max_namespaces": 32, 00:06:17.434 "min_cntlid": 1, 00:06:17.434 "max_cntlid": 65519, 00:06:17.434 "namespaces": [ 00:06:17.434 { 00:06:17.434 "nsid": 1, 00:06:17.434 "bdev_name": "Null3", 00:06:17.434 "name": "Null3", 00:06:17.434 "nguid": "23DE4D08D04E4ECD807EE8E7AF55DA92", 00:06:17.434 "uuid": "23de4d08-d04e-4ecd-807e-e8e7af55da92" 00:06:17.434 } 00:06:17.434 ] 00:06:17.434 }, 00:06:17.434 { 00:06:17.434 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:17.434 "subtype": "NVMe", 00:06:17.434 "listen_addresses": [ 00:06:17.434 { 00:06:17.434 "trtype": "TCP", 00:06:17.434 "adrfam": "IPv4", 00:06:17.434 "traddr": "10.0.0.2", 00:06:17.434 "trsvcid": "4420" 00:06:17.434 } 00:06:17.434 ], 00:06:17.434 "allow_any_host": true, 00:06:17.434 "hosts": [], 00:06:17.434 "serial_number": "SPDK00000000000004", 00:06:17.434 "model_number": "SPDK bdev Controller", 00:06:17.434 "max_namespaces": 32, 00:06:17.434 "min_cntlid": 1, 00:06:17.434 "max_cntlid": 65519, 00:06:17.434 "namespaces": [ 00:06:17.434 { 00:06:17.434 "nsid": 1, 00:06:17.434 "bdev_name": "Null4", 00:06:17.434 "name": "Null4", 00:06:17.434 "nguid": "F002C81B00394B3892BB9B8C2A3FB35D", 00:06:17.434 "uuid": "f002c81b-0039-4b38-92bb-9b8c2a3fb35d" 00:06:17.434 } 00:06:17.434 ] 00:06:17.434 } 00:06:17.434 ] 00:06:17.434 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.434 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:06:17.434 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:17.434 22:30:00 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:17.434 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.434 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.434 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.434 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:17.434 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.434 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.434 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.434 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:17.434 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:17.434 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.434 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.435 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.435 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:17.435 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.435 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.435 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.435 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:17.435 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:17.435 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.435 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.435 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.435 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:17.435 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.435 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.435 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.435 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:17.435 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:17.435 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.435 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.693 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.693 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:17.693 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.693 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.693 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.693 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:17.693 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # 
xtrace_disable 00:06:17.693 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.693 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.693 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:17.693 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:17.693 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.693 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.693 22:30:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.693 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:06:17.693 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:17.693 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:17.693 22:30:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:06:17.693 22:30:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:17.693 22:30:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:06:17.693 22:30:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:17.693 22:30:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:06:17.693 22:30:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:17.693 22:30:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:17.693 rmmod nvme_tcp 00:06:17.693 rmmod nvme_fabrics 00:06:17.693 rmmod nvme_keyring 00:06:17.693 22:30:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:17.693 22:30:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:06:17.693 
22:30:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:06:17.693 22:30:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1158421 ']' 00:06:17.693 22:30:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1158421 00:06:17.693 22:30:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@942 -- # '[' -z 1158421 ']' 00:06:17.693 22:30:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # kill -0 1158421 00:06:17.693 22:30:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@947 -- # uname 00:06:17.693 22:30:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:17.693 22:30:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1158421 00:06:17.693 22:30:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:17.693 22:30:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:17.693 22:30:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1158421' 00:06:17.693 killing process with pid 1158421 00:06:17.693 22:30:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@961 -- # kill 1158421 00:06:17.693 22:30:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # wait 1158421 00:06:17.951 22:30:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:17.951 22:30:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:17.951 22:30:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:17.951 22:30:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:17.951 22:30:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:17.951 22:30:01 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:17.951 22:30:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:17.951 22:30:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:20.487 22:30:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:20.487 00:06:20.487 real 0m5.468s 00:06:20.487 user 0m4.616s 00:06:20.487 sys 0m1.812s 00:06:20.487 22:30:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:20.487 22:30:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:20.487 ************************************ 00:06:20.487 END TEST nvmf_target_discovery 00:06:20.487 ************************************ 00:06:20.487 22:30:03 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:06:20.487 22:30:03 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:20.487 22:30:03 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:06:20.487 22:30:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:20.487 22:30:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:20.487 ************************************ 00:06:20.487 START TEST nvmf_referrals 00:06:20.487 ************************************ 00:06:20.487 22:30:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:20.487 * Looking for test storage... 
00:06:20.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:20.487 22:30:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:20.487 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:06:20.487 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:20.487 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.487 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:20.487 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:20.487 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:20.487 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.487 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:20.487 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.487 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.487 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:20.487 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:20.487 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:20.487 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:20.487 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:20.487 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:20.487 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:20.488 
22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:06:20.488 22:30:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
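The referrals test initialized above defines three referral addresses (`NVMF_REFERRAL_IP_1..3` = 127.0.0.2–127.0.0.4) and `NVMF_PORT_REFERRAL=4430`, and exercises them over the same RPC surface. As a hedged sketch of that flow (a live `nvmf_tgt` and `rpc.py` are assumed; only the addresses and port come from this log):

```shell
# Register three discovery referrals, mirroring NVMF_REFERRAL_IP_1..3
# and NVMF_PORT_REFERRAL=4430 set by referrals.sh above.
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done

# Referrals appear in the discovery log page of
# nqn.2014-08.org.nvmexpress.discovery; list and remove them with:
rpc.py nvmf_discovery_get_referrals
rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
```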
00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:22.396 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:22.396 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:22.396 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:22.396 22:30:05 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:22.396 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:22.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:22.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:06:22.396 00:06:22.396 --- 10.0.0.2 ping statistics --- 00:06:22.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:22.396 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:06:22.396 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:22.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:22.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:06:22.397 00:06:22.397 --- 10.0.0.1 ping statistics --- 00:06:22.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:22.397 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:06:22.397 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:22.397 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:06:22.397 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:22.397 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:22.397 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:22.397 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:22.397 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:22.397 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:22.397 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:22.397 22:30:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:22.397 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:22.397 22:30:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:22.397 22:30:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:22.397 22:30:05 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1160625 00:06:22.397 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:22.397 22:30:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1160625 00:06:22.397 22:30:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@823 -- # '[' -z 1160625 ']' 00:06:22.397 22:30:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.397 22:30:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:22.397 22:30:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.397 22:30:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:22.397 22:30:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:22.397 [2024-07-15 22:30:05.797059] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:06:22.397 [2024-07-15 22:30:05.797145] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:22.397 [2024-07-15 22:30:05.870664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:22.655 [2024-07-15 22:30:05.992357] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:22.655 [2024-07-15 22:30:05.992421] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:06:22.655 [2024-07-15 22:30:05.992437] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:22.655 [2024-07-15 22:30:05.992450] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:22.655 [2024-07-15 22:30:05.992462] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:22.655 [2024-07-15 22:30:05.992549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.655 [2024-07-15 22:30:05.992604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.655 [2024-07-15 22:30:05.992656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:22.655 [2024-07-15 22:30:05.992659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.655 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:22.655 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # return 0 00:06:22.655 22:30:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:22.655 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:22.655 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:22.655 22:30:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:22.655 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:22.655 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:22.655 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:22.914 [2024-07-15 22:30:06.158123] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:22.914 22:30:06 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:22.914 [2024-07-15 22:30:06.170393] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:22.914 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:23.172 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:23.172 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:23.172 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:23.172 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:23.172 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:23.172 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:23.172 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:23.172 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:23.172 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:23.172 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:23.172 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:23.172 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:23.172 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:23.172 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:23.172 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:23.172 22:30:06 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:06:23.172 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:23.172 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:23.172 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:23.172 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:23.172 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:23.172 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:23.172 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:23.172 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:23.172 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:23.172 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:23.172 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:23.172 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:23.172 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:06:23.172 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:23.172 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:23.172 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:23.173 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n 
nqn.2016-06.io.spdk:cnode1 00:06:23.173 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:23.173 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:23.173 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:23.173 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:23.173 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:23.173 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:23.173 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:23.173 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:23.173 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:23.173 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:23.173 22:30:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:23.431 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:23.431 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:23.431 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:23.431 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:23.431 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:23.431 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:23.431 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | 
select(.subtype != "current discovery subsystem").traddr' 00:06:23.431 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:23.431 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:23.431 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:23.431 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:06:23.431 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:23.431 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:23.431 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:23.431 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:23.431 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:23.431 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:23.431 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:23.431 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:23.431 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:23.431 22:30:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
00:06:23.690 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:23.690 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:23.690 22:30:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:23.690 22:30:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:23.690 22:30:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:23.690 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:23.690 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:23.690 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:23.690 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:23.690 22:30:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:23.690 22:30:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:23.690 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:23.690 22:30:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:23.690 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:23.690 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:23.690 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:23.690 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:23.690 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:23.690 22:30:07 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:23.690 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:23.690 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:23.690 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:23.690 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:23.690 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:23.690 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:23.690 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:23.690 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:23.690 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:23.948 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:23.948 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:23.948 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:23.948 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:23.948 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:06:23.948 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:24.206 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:24.206 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:24.206 22:30:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:24.206 22:30:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:24.206 22:30:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:24.206 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:24.207 22:30:07 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:24.207 rmmod nvme_tcp 00:06:24.207 rmmod nvme_fabrics 00:06:24.207 rmmod nvme_keyring 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1160625 ']' 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1160625 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@942 -- # '[' -z 1160625 ']' 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # kill -0 1160625 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@947 -- # uname 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- 
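The trace above repeatedly captures `nvme discover` output, filters it with jq, sorts the addresses, and compares the joined result against an expected literal using a glob-escaped `[[ ... ]]` match. A minimal sketch of that collect-sort-compare pattern (the `get_ips` helper and the sample addresses are hypothetical stand-ins for the real discovery pipeline):

```shell
#!/usr/bin/env bash
# Stand-in for get_referral_ips: gather addresses, sort them,
# then join with spaces so the result can be compared as one string.
get_ips() {
    printf '%s\n' "$@" | sort | xargs   # xargs joins the sorted lines with spaces
}

ips=$(get_ips 127.0.0.3 127.0.0.2)
# The harness backslash-escapes the expected string so no glob matching occurs.
[[ $ips == "127.0.0.2 127.0.0.3" ]] && echo match
```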
common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1160625 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1160625' 00:06:24.207 killing process with pid 1160625 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@961 -- # kill 1160625 00:06:24.207 22:30:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # wait 1160625 00:06:24.466 22:30:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:24.466 22:30:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:24.466 22:30:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:24.466 22:30:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:24.466 22:30:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:24.466 22:30:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:24.466 22:30:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:24.466 22:30:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:27.003 22:30:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:27.003 00:06:27.003 real 0m6.565s 00:06:27.003 user 0m9.130s 00:06:27.003 sys 0m2.153s 00:06:27.003 22:30:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:27.003 22:30:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:27.003 ************************************ 
00:06:27.003 END TEST nvmf_referrals 00:06:27.003 ************************************ 00:06:27.003 22:30:10 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:06:27.003 22:30:10 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:27.003 22:30:10 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:06:27.003 22:30:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:27.004 22:30:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:27.004 ************************************ 00:06:27.004 START TEST nvmf_connect_disconnect 00:06:27.004 ************************************ 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:27.004 * Looking for test storage... 00:06:27.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:27.004 22:30:10 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:06:27.004 22:30:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 
00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:28.911 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:28.911 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:28.911 22:30:12 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:28.911 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:28.911 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
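The device scan above buckets PCI vendor/device pairs into the `e810`, `x722`, and `mlx` arrays before choosing the TCP test interfaces. That mapping can be sketched as a case statement (the IDs are taken from the log; the function name is illustrative):

```shell
# Map a PCI vendor:device pair to the NIC family the harness recognizes.
classify_nic() {
    case "$1:$2" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;     # Intel E810
        0x8086:0x37d2)               echo x722 ;;     # Intel X722
        0x15b3:*)                    echo mlx ;;      # Mellanox families
        *)                           echo unknown ;;
    esac
}

classify_nic 0x8086 0x159b   # prints "e810", matching the 0000:0a:00.x devices found above
```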
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:28.911 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:28.912 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:28.912 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:28.912 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:28.912 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:28.912 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:28.912 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:28.912 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:28.912 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:28.912 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:28.912 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:28.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:28.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:06:28.912 00:06:28.912 --- 10.0.0.2 ping statistics --- 00:06:28.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:28.912 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:06:28.912 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:28.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:28.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:06:28.912 00:06:28.912 --- 10.0.0.1 ping statistics --- 00:06:28.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:28.912 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:06:28.912 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:28.912 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:06:28.912 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:28.912 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:28.912 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:28.912 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:28.912 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:28.912 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:28.912 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:28.912 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:06:28.912 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:28.912 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@716 -- # 
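The `nvmf_tcp_init` steps above move one port of the NIC into a private network namespace so the target (10.0.0.2) and initiator (10.0.0.1) exchange traffic over real hardware within one host. Condensed, the plumbing is roughly the following configuration fragment (requires root; `cvl_0_0`/`cvl_0_1` are the interface names from this particular run):

```shell
ip netns add cvl_0_0_ns_spdk                       # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address (host side)
ip netns exec cvl_0_0_ns_spdk \
    ip addr add 10.0.0.2/24 dev cvl_0_0            # target address (namespaced side)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # sanity-check both directions,
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # as seen in the ping output above
```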
xtrace_disable 00:06:28.912 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:28.912 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1163425 00:06:28.912 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:28.912 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1163425 00:06:28.912 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@823 -- # '[' -z 1163425 ']' 00:06:28.912 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.912 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:28.912 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.912 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:28.912 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:28.912 [2024-07-15 22:30:12.353453] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:06:28.912 [2024-07-15 22:30:12.353524] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:29.170 [2024-07-15 22:30:12.416329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:29.170 [2024-07-15 22:30:12.523524] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:06:29.170 [2024-07-15 22:30:12.523591] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:29.170 [2024-07-15 22:30:12.523619] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:29.170 [2024-07-15 22:30:12.523630] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:29.170 [2024-07-15 22:30:12.523639] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:29.170 [2024-07-15 22:30:12.523728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.170 [2024-07-15 22:30:12.523794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.170 [2024-07-15 22:30:12.523863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.170 [2024-07-15 22:30:12.523866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.170 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:29.170 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # return 0 00:06:29.170 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:29.170 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:29.170 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:29.431 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:29.431 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:29.431 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:29.431 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- 
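`waitforlisten` in the trace above blocks until the freshly launched `nvmf_tgt` exposes its RPC socket (`/var/tmp/spdk.sock`). A stripped-down version of that poll loop (the helper name is illustrative; the real helper also checks that the process is still alive):

```shell
# Poll for a UNIX socket to appear, up to max_retries * 0.1 seconds.
wait_for_rpc_sock() {
    local sock=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [[ -S $sock ]] && return 0
        sleep 0.1
    done
    return 1
}

wait_for_rpc_sock /nonexistent.sock 3 || echo "timed out"
```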
common/autotest_common.sh@10 -- # set +x 00:06:29.431 [2024-07-15 22:30:12.680599] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.431 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:29.431 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:06:29.431 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:29.431 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:29.431 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:29.431 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:06:29.431 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:29.431 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:29.431 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:29.431 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:29.431 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:29.431 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:29.431 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:29.431 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:29.431 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:29.431 22:30:12 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:29.431 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:29.431 [2024-07-15 22:30:12.732124] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:29.431 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:29.431 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:06:29.431 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:06:29.431 22:30:12 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:06:31.958 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:35.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:37.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:40.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:43.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:43.663 22:30:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:06:43.663 22:30:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:06:43.663 22:30:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:43.663 22:30:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:06:43.663 22:30:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:43.663 22:30:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:06:43.663 22:30:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:43.663 22:30:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:43.663 rmmod nvme_tcp 00:06:43.663 rmmod nvme_fabrics 00:06:43.663 
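The five "disconnected 1 controller(s)" lines above come from a loop that connects and disconnects the initiator `num_iterations` times. Structurally it looks like the sketch below (the function bodies are echo stand-ins for the real `nvme connect` / `nvme disconnect` invocations, which need a live target):

```shell
num_iterations=5
connect()    { echo "connected";    }  # stand-in for: nvme connect -t tcp -n <subnqn> ...
disconnect() { echo "disconnected"; }  # stand-in for: nvme disconnect -n <subnqn>

# Count the disconnect messages, as the log's NQN lines do.
count=$(for ((i = 1; i <= num_iterations; i++)); do
    connect
    disconnect
done | grep -c disconnected)
echo "$count"
```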
rmmod nvme_keyring 00:06:43.663 22:30:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:43.663 22:30:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:06:43.663 22:30:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:06:43.663 22:30:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1163425 ']' 00:06:43.663 22:30:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1163425 00:06:43.663 22:30:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@942 -- # '[' -z 1163425 ']' 00:06:43.663 22:30:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # kill -0 1163425 00:06:43.663 22:30:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@947 -- # uname 00:06:43.663 22:30:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:43.663 22:30:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1163425 00:06:43.663 22:30:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:43.663 22:30:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:43.663 22:30:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1163425' 00:06:43.663 killing process with pid 1163425 00:06:43.663 22:30:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@961 -- # kill 1163425 00:06:43.663 22:30:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # wait 1163425 00:06:43.663 22:30:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:43.663 22:30:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:43.663 22:30:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
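The connect_disconnect run above boils down to four RPC calls that build the target before the connect/disconnect loop starts. The dry-run sketch below condenses them; the literal arguments are taken verbatim from the log lines, while the `RPC_PY` path is an assumption (SPDK's usual RPC client location), and the script only prints the invocations rather than executing them, since no running SPDK target is assumed.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the RPC sequence logged by the connect_disconnect test.
# RPC_PY is a hypothetical path; the commands themselves appear in the log.
RPC_PY="scripts/rpc.py"
NQN="nqn.2016-06.io.spdk:cnode1"
rpc_cmds=(
  "bdev_malloc_create 64 512"                                   # 64 MB malloc bdev, 512 B blocks -> Malloc0
  "nvmf_create_subsystem $NQN -a -s SPDKISFASTANDAWESOME"       # -a: allow any host; -s: serial number
  "nvmf_subsystem_add_ns $NQN Malloc0"                          # attach Malloc0 as a namespace
  "nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420" # NVMe/TCP listener on 10.0.0.2:4420
)
for cmd in "${rpc_cmds[@]}"; do
  echo "${RPC_PY} ${cmd}"   # print only; the harness runs these against the live target
done
```

After these four calls the harness repeatedly connects and disconnects an initiator against the listener, which is exactly the "disconnected 1 controller(s)" lines in the log.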
00:06:43.663 22:30:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:43.663 22:30:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:43.663 22:30:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.663 22:30:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:43.663 22:30:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.569 22:30:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:45.569 00:06:45.569 real 0m18.922s 00:06:45.569 user 0m56.988s 00:06:45.569 sys 0m3.240s 00:06:45.569 22:30:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:45.569 22:30:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:45.569 ************************************ 00:06:45.569 END TEST nvmf_connect_disconnect 00:06:45.569 ************************************ 00:06:45.569 22:30:28 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:06:45.569 22:30:28 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:06:45.570 22:30:28 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:06:45.570 22:30:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:45.570 22:30:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:45.570 ************************************ 00:06:45.570 START TEST nvmf_multitarget 00:06:45.570 ************************************ 00:06:45.570 22:30:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:06:45.570 * Looking for 
test storage... 00:06:45.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:06:45.827 22:30:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget 
-- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:47.730 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:47.730 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:47.730 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:47.730 22:30:31 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:47.730 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:47.730 22:30:31 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:47.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:47.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:06:47.730 00:06:47.730 --- 10.0.0.2 ping statistics --- 00:06:47.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:47.730 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:47.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:47.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:06:47.730 00:06:47.730 --- 10.0.0.1 ping statistics --- 00:06:47.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:47.730 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:47.730 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:06:47.731 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:47.731 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:47.731 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:47.731 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:47.731 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:47.731 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:47.731 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:47.731 22:30:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:06:47.731 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:47.731 22:30:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:47.731 22:30:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:47.731 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1167076 00:06:47.731 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:47.731 22:30:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1167076 00:06:47.731 22:30:31 nvmf_tcp.nvmf_multitarget -- 
common/autotest_common.sh@823 -- # '[' -z 1167076 ']' 00:06:47.731 22:30:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.731 22:30:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:47.731 22:30:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.731 22:30:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:47.731 22:30:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:47.990 [2024-07-15 22:30:31.258947] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:06:47.991 [2024-07-15 22:30:31.259038] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:47.991 [2024-07-15 22:30:31.328425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:47.991 [2024-07-15 22:30:31.450815] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:47.991 [2024-07-15 22:30:31.450893] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:47.991 [2024-07-15 22:30:31.450911] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:47.991 [2024-07-15 22:30:31.450926] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:47.991 [2024-07-15 22:30:31.450943] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
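Before the multitarget app start logged above, nvmftestinit assembled a point-to-point NVMe/TCP topology: the target-side e810 port is moved into a network namespace so initiator and target traffic crosses a real TCP path between the two physical ports. The dry-run sketch below summarizes that setup; interface names and addresses come from the log, and the script only prints the commands, since the real sequence needs root and the physical NICs.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace topology from the log: cvl_0_0 (target side)
# goes into namespace cvl_0_0_ns_spdk with 10.0.0.2/24, while cvl_0_1
# (initiator side) stays in the root namespace with 10.0.0.1/24.
NS="cvl_0_0_ns_spdk"
TGT_IF="cvl_0_0"   # target-side port, addressed 10.0.0.2/24 inside $NS
INI_IF="cvl_0_1"   # initiator-side port, addressed 10.0.0.1/24 in the root ns
setup_cmds=(
  "ip netns add $NS"
  "ip link set $TGT_IF netns $NS"
  "ip addr add 10.0.0.1/24 dev $INI_IF"
  "ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF"
  "ip link set $INI_IF up"
  "ip netns exec $NS ip link set $TGT_IF up"
  "ip netns exec $NS ip link set lo up"
  "iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT"  # admit NVMe/TCP
)
for cmd in "${setup_cmds[@]}"; do
  echo "$cmd"   # print only; the harness executes these as root
done
```

The two pings in the log (10.0.0.2 from the root namespace, 10.0.0.1 from inside the namespace) then verify the path in both directions before nvmf_tgt is launched under `ip netns exec`.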
00:06:47.991 [2024-07-15 22:30:31.451002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.991 [2024-07-15 22:30:31.451058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.991 [2024-07-15 22:30:31.451110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:47.991 [2024-07-15 22:30:31.451114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.925 22:30:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:48.925 22:30:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # return 0 00:06:48.925 22:30:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:48.925 22:30:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:48.925 22:30:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:48.925 22:30:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:48.925 22:30:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:06:48.925 22:30:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:48.925 22:30:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:06:48.925 22:30:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:06:48.925 22:30:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:06:49.182 "nvmf_tgt_1" 00:06:49.182 22:30:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:06:49.182 "nvmf_tgt_2" 00:06:49.182 22:30:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:49.182 22:30:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:06:49.439 22:30:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:06:49.439 22:30:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:06:49.439 true 00:06:49.439 22:30:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:06:49.439 true 00:06:49.439 22:30:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:49.439 22:30:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:06:49.696 22:30:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:06:49.696 22:30:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:49.696 22:30:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:06:49.696 22:30:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:49.696 22:30:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:06:49.696 22:30:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:49.696 22:30:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:06:49.696 22:30:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:49.696 22:30:33 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:49.696 rmmod nvme_tcp 00:06:49.696 rmmod nvme_fabrics 00:06:49.696 rmmod nvme_keyring 00:06:49.697 22:30:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:49.697 22:30:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:06:49.697 22:30:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:06:49.697 22:30:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1167076 ']' 00:06:49.697 22:30:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1167076 00:06:49.697 22:30:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@942 -- # '[' -z 1167076 ']' 00:06:49.697 22:30:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # kill -0 1167076 00:06:49.697 22:30:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@947 -- # uname 00:06:49.697 22:30:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:49.697 22:30:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1167076 00:06:49.697 22:30:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:49.697 22:30:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:49.697 22:30:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1167076' 00:06:49.697 killing process with pid 1167076 00:06:49.697 22:30:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@961 -- # kill 1167076 00:06:49.697 22:30:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # wait 1167076 00:06:49.955 22:30:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:49.955 22:30:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:49.955 22:30:33 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:49.955 22:30:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:49.955 22:30:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:49.955 22:30:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:49.955 22:30:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:49.955 22:30:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:52.488 22:30:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:52.488 00:06:52.488 real 0m6.413s 00:06:52.488 user 0m9.159s 00:06:52.488 sys 0m2.024s 00:06:52.488 22:30:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:52.488 22:30:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:52.488 ************************************ 00:06:52.488 END TEST nvmf_multitarget 00:06:52.488 ************************************ 00:06:52.488 22:30:35 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:06:52.488 22:30:35 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:06:52.488 22:30:35 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:06:52.488 22:30:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:52.488 22:30:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:52.488 ************************************ 00:06:52.488 START TEST nvmf_rpc 00:06:52.488 ************************************ 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:06:52.488 * Looking for test storage... 
00:06:52.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:06:52.488 22:30:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.397 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:54.397 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:06:54.397 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:54.397 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:54.397 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:54.397 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:54.397 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:54.397 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:06:54.397 22:30:37 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:54.397 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:06:54.397 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:06:54.397 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:06:54.397 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:06:54.397 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:06:54.397 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:06:54.397 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:54.397 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:54.397 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:54.397 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:54.397 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:54.397 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:54.397 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:54.397 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:54.397 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:54.397 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:54.397 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:54.397 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:54.397 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:54.397 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 
== mlx5 ]] 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:54.398 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:54.398 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:54.398 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:54.398 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 
00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:54.398 22:30:37 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:54.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:54.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:06:54.398 00:06:54.398 --- 10.0.0.2 ping statistics --- 00:06:54.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.398 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:54.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:54.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:06:54.398 00:06:54.398 --- 10.0.0.1 ping statistics --- 00:06:54.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.398 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:06:54.398 
22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1169301 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1169301 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@823 -- # '[' -z 1169301 ']' 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:54.398 22:30:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.398 [2024-07-15 22:30:37.794677] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:06:54.398 [2024-07-15 22:30:37.794746] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:54.398 [2024-07-15 22:30:37.859925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:54.658 [2024-07-15 22:30:37.985543] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:06:54.658 [2024-07-15 22:30:37.985595] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:54.658 [2024-07-15 22:30:37.985621] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:54.658 [2024-07-15 22:30:37.985634] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:54.658 [2024-07-15 22:30:37.985646] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:54.658 [2024-07-15 22:30:37.985744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.658 [2024-07-15 22:30:37.985796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.658 [2024-07-15 22:30:37.985858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:54.658 [2024-07-15 22:30:37.985862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.324 22:30:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:55.324 22:30:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # return 0 00:06:55.324 22:30:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:55.324 22:30:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:55.324 22:30:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.324 22:30:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:55.324 22:30:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:06:55.324 22:30:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:55.324 22:30:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.583 22:30:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:55.583 22:30:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # 
stats='{ 00:06:55.583 "tick_rate": 2700000000, 00:06:55.583 "poll_groups": [ 00:06:55.583 { 00:06:55.583 "name": "nvmf_tgt_poll_group_000", 00:06:55.583 "admin_qpairs": 0, 00:06:55.583 "io_qpairs": 0, 00:06:55.583 "current_admin_qpairs": 0, 00:06:55.583 "current_io_qpairs": 0, 00:06:55.583 "pending_bdev_io": 0, 00:06:55.583 "completed_nvme_io": 0, 00:06:55.583 "transports": [] 00:06:55.583 }, 00:06:55.583 { 00:06:55.583 "name": "nvmf_tgt_poll_group_001", 00:06:55.583 "admin_qpairs": 0, 00:06:55.583 "io_qpairs": 0, 00:06:55.583 "current_admin_qpairs": 0, 00:06:55.583 "current_io_qpairs": 0, 00:06:55.583 "pending_bdev_io": 0, 00:06:55.583 "completed_nvme_io": 0, 00:06:55.583 "transports": [] 00:06:55.583 }, 00:06:55.583 { 00:06:55.583 "name": "nvmf_tgt_poll_group_002", 00:06:55.583 "admin_qpairs": 0, 00:06:55.583 "io_qpairs": 0, 00:06:55.583 "current_admin_qpairs": 0, 00:06:55.583 "current_io_qpairs": 0, 00:06:55.583 "pending_bdev_io": 0, 00:06:55.583 "completed_nvme_io": 0, 00:06:55.583 "transports": [] 00:06:55.583 }, 00:06:55.583 { 00:06:55.583 "name": "nvmf_tgt_poll_group_003", 00:06:55.583 "admin_qpairs": 0, 00:06:55.583 "io_qpairs": 0, 00:06:55.583 "current_admin_qpairs": 0, 00:06:55.583 "current_io_qpairs": 0, 00:06:55.583 "pending_bdev_io": 0, 00:06:55.583 "completed_nvme_io": 0, 00:06:55.583 "transports": [] 00:06:55.583 } 00:06:55.583 ] 00:06:55.583 }' 00:06:55.583 22:30:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:06:55.583 22:30:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:06:55.583 22:30:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:06:55.583 22:30:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:06:55.583 22:30:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:06:55.583 22:30:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:06:55.583 22:30:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 
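The `jcount` helper traced here pipes a jq filter through `wc -l` to count matches, and `jsum` (used a few entries later on `.poll_groups[].admin_qpairs`) sums the matched values with awk. A minimal stand-in for the summing idea, using grep/awk on an inline stats blob instead of jq so it carries no extra dependency (the field values here are illustrative, not from this run):

```shell
# Illustrative stand-in for the jsum idea from target/rpc.sh:
# pull every "io_qpairs" value out of a stats JSON blob and sum them with awk.
stats='{"poll_groups":[{"io_qpairs":1},{"io_qpairs":2},{"io_qpairs":3},{"io_qpairs":4}]}'
total=$(printf '%s\n' "$stats" \
  | grep -o '"io_qpairs": *[0-9]*' \
  | awk -F': *' '{s += $2} END {print s}')
echo "io_qpairs total: $total"
```

In the real helper the filter is a jq expression (`jq "$filter" | awk '{s+=$1}END{print s}'`), which is why the trace shows separate `jq` and `awk` invocations for each `jsum` call.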
00:06:55.583 22:30:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:55.583 22:30:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:55.583 22:30:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.583 [2024-07-15 22:30:38.917558] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:55.583 22:30:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:55.583 22:30:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:06:55.583 22:30:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:55.583 22:30:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.583 22:30:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:55.583 22:30:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:06:55.583 "tick_rate": 2700000000, 00:06:55.583 "poll_groups": [ 00:06:55.583 { 00:06:55.583 "name": "nvmf_tgt_poll_group_000", 00:06:55.583 "admin_qpairs": 0, 00:06:55.583 "io_qpairs": 0, 00:06:55.583 "current_admin_qpairs": 0, 00:06:55.583 "current_io_qpairs": 0, 00:06:55.583 "pending_bdev_io": 0, 00:06:55.583 "completed_nvme_io": 0, 00:06:55.583 "transports": [ 00:06:55.583 { 00:06:55.583 "trtype": "TCP" 00:06:55.583 } 00:06:55.583 ] 00:06:55.583 }, 00:06:55.583 { 00:06:55.583 "name": "nvmf_tgt_poll_group_001", 00:06:55.583 "admin_qpairs": 0, 00:06:55.583 "io_qpairs": 0, 00:06:55.583 "current_admin_qpairs": 0, 00:06:55.583 "current_io_qpairs": 0, 00:06:55.583 "pending_bdev_io": 0, 00:06:55.583 "completed_nvme_io": 0, 00:06:55.583 "transports": [ 00:06:55.583 { 00:06:55.583 "trtype": "TCP" 00:06:55.583 } 00:06:55.583 ] 00:06:55.583 }, 00:06:55.583 { 00:06:55.583 "name": "nvmf_tgt_poll_group_002", 00:06:55.583 "admin_qpairs": 0, 00:06:55.583 "io_qpairs": 0, 00:06:55.583 "current_admin_qpairs": 0, 00:06:55.583 "current_io_qpairs": 0, 00:06:55.583 
"pending_bdev_io": 0, 00:06:55.583 "completed_nvme_io": 0, 00:06:55.583 "transports": [ 00:06:55.583 { 00:06:55.583 "trtype": "TCP" 00:06:55.583 } 00:06:55.583 ] 00:06:55.583 }, 00:06:55.583 { 00:06:55.583 "name": "nvmf_tgt_poll_group_003", 00:06:55.583 "admin_qpairs": 0, 00:06:55.583 "io_qpairs": 0, 00:06:55.583 "current_admin_qpairs": 0, 00:06:55.583 "current_io_qpairs": 0, 00:06:55.583 "pending_bdev_io": 0, 00:06:55.583 "completed_nvme_io": 0, 00:06:55.583 "transports": [ 00:06:55.583 { 00:06:55.583 "trtype": "TCP" 00:06:55.583 } 00:06:55.583 ] 00:06:55.583 } 00:06:55.583 ] 00:06:55.583 }' 00:06:55.583 22:30:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:06:55.583 22:30:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:06:55.583 22:30:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:06:55.583 22:30:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:06:55.583 22:30:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:06:55.583 22:30:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:06:55.583 22:30:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:06:55.583 22:30:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:06:55.583 22:30:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:06:55.583 22:30:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:06:55.583 22:30:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:06:55.583 22:30:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:06:55.583 22:30:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:06:55.583 22:30:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:06:55.583 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # 
xtrace_disable 00:06:55.583 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.583 Malloc1 00:06:55.583 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:55.583 22:30:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:55.583 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:55.583 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.583 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:55.583 22:30:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:55.583 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:55.583 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.583 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:55.584 22:30:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:06:55.584 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:55.584 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.584 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:55.584 22:30:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:55.584 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:55.584 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.584 [2024-07-15 22:30:39.065194] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:55.584 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:55.584 
22:30:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:06:55.584 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # local es=0 00:06:55.584 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@644 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:06:55.584 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@630 -- # local arg=nvme 00:06:55.584 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:55.584 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@634 -- # type -t nvme 00:06:55.584 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:55.584 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # type -P nvme 00:06:55.584 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:55.584 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # arg=/usr/sbin/nvme 00:06:55.584 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # [[ -x /usr/sbin/nvme ]] 00:06:55.584 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@645 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:06:55.852 [2024-07-15 22:30:39.087628] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: 
Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:06:55.852 Failed to write to /dev/nvme-fabrics: Input/output error 00:06:55.852 could not add new controller: failed to write to nvme-fabrics device 00:06:55.852 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@645 -- # es=1 00:06:55.852 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:06:55.852 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:06:55.852 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:06:55.852 22:30:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:55.852 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:55.852 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.852 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:55.852 22:30:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:56.418 22:30:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:06:56.418 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1192 -- # local i=0 00:06:56.418 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:06:56.418 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:06:56.418 22:30:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # sleep 2 00:06:58.324 22:30:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:06:58.324 22:30:41 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:06:58.324 22:30:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:06:58.324 22:30:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:06:58.324 22:30:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:06:58.324 22:30:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # return 0 00:06:58.324 22:30:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:58.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:58.324 22:30:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:58.324 22:30:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:06:58.324 22:30:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:06:58.324 22:30:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:58.585 22:30:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:06:58.585 22:30:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:58.585 22:30:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:06:58.585 22:30:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:58.585 22:30:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:58.585 22:30:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.585 22:30:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:58.585 22:30:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
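The `waitforserial` / `waitforserial_disconnect` helpers traced above poll `lsblk -l -o NAME,SERIAL | grep -c <serial>` up to 15 times, sleeping between attempts, until the expected number of namespaces appears (or disappears). A minimal Python sketch of that retry loop, for illustration only (the `count_devices` callable is a hypothetical stand-in for the `lsblk | grep -c` pipeline; it is not part of the SPDK scripts):

```python
import time

def waitforserial(count_devices, expected=1, retries=15, delay=2.0):
    """Poll until `count_devices()` reaches `expected`, mirroring the
    `(( i++ <= 15 ))` / `sleep 2` loop in autotest_common.sh.

    count_devices: stand-in for `lsblk -l -o NAME,SERIAL | grep -c SERIAL`.
    Returns True once enough devices are visible, False after exhausting retries.
    """
    for _ in range(retries):
        time.sleep(delay)          # the trace sleeps before each recheck
        if count_devices() >= expected:
            return True
    return False
```

The sleep-then-check ordering matches the trace, where `sleep 2` runs before the first `lsblk` recount after `nvme connect`.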
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:58.585 22:30:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # local es=0 00:06:58.585 22:30:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@644 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:58.585 22:30:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@630 -- # local arg=nvme 00:06:58.585 22:30:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:58.585 22:30:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@634 -- # type -t nvme 00:06:58.585 22:30:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:58.585 22:30:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # type -P nvme 00:06:58.585 22:30:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:58.585 22:30:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # arg=/usr/sbin/nvme 00:06:58.585 22:30:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # [[ -x /usr/sbin/nvme ]] 00:06:58.585 22:30:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@645 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:58.585 [2024-07-15 22:30:41.868975] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:06:58.585 Failed to write to /dev/nvme-fabrics: Input/output error 00:06:58.585 could not add new controller: failed to write to nvme-fabrics device 00:06:58.585 22:30:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@645 -- # 
es=1 00:06:58.585 22:30:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:06:58.585 22:30:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:06:58.585 22:30:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:06:58.585 22:30:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:06:58.585 22:30:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:58.585 22:30:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.585 22:30:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:58.585 22:30:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:59.154 22:30:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:06:59.154 22:30:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1192 -- # local i=0 00:06:59.154 22:30:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:06:59.154 22:30:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:06:59.154 22:30:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # sleep 2 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1202 -- # return 0 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:01.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.694 [2024-07-15 22:30:44.707467] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:01.694 22:30:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:01.955 22:30:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:01.955 22:30:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1192 -- # local i=0 00:07:01.955 22:30:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:07:01.955 22:30:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:07:01.955 22:30:45 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1199 -- # sleep 2 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # return 0 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:04.489 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.489 [2024-07-15 22:30:47.521661] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.489 22:30:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:04.490 22:30:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:04.749 22:30:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:04.749 22:30:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1192 -- # local i=0 00:07:04.749 22:30:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:07:04.749 22:30:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:07:04.749 22:30:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # sleep 2 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # return 0 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:07.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:07:07.290 22:30:50 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.290 [2024-07-15 22:30:50.349951] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:07.290 22:30:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:07.550 22:30:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:07.550 22:30:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1192 -- # local i=0 00:07:07.550 22:30:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:07:07.550 22:30:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:07:07.550 22:30:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # sleep 2 00:07:10.085 22:30:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 
15 )) 00:07:10.085 22:30:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:07:10.085 22:30:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:07:10.085 22:30:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:07:10.085 22:30:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:07:10.085 22:30:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # return 0 00:07:10.085 22:30:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:10.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:10.085 22:30:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:10.085 22:30:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:07:10.085 22:30:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:07:10.085 22:30:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:10.085 22:30:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:07:10.085 22:30:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:10.085 22:30:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:07:10.085 22:30:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:10.085 22:30:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:10.085 22:30:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.085 22:30:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:10.085 22:30:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:10.085 22:30:53 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:07:10.085 22:30:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.085 22:30:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:10.085 22:30:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:10.085 22:30:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:10.085 22:30:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:10.085 22:30:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.085 22:30:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:10.085 22:30:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:10.085 22:30:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:10.085 22:30:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.085 [2024-07-15 22:30:53.135646] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:10.085 22:30:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:10.085 22:30:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:10.085 22:30:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:10.085 22:30:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.085 22:30:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:10.085 22:30:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:10.085 22:30:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:10.085 22:30:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:07:10.085 22:30:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:10.085 22:30:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:10.344 22:30:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:10.344 22:30:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1192 -- # local i=0 00:07:10.344 22:30:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:07:10.344 22:30:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:07:10.344 22:30:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # sleep 2 00:07:12.874 22:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:07:12.874 22:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:07:12.874 22:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:07:12.874 22:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:07:12.874 22:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:07:12.874 22:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # return 0 00:07:12.874 22:30:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:12.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:12.874 22:30:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:12.874 22:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:07:12.874 22:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:07:12.874 22:30:55 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:12.874 22:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:07:12.874 22:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:12.874 22:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:07:12.874 22:30:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:12.874 22:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:12.874 22:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.874 22:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:12.874 22:30:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:12.874 22:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:12.874 22:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.874 22:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:12.874 22:30:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:12.874 22:30:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:12.874 22:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:12.874 22:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.874 22:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:12.874 22:30:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:12.874 22:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:12.875 22:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:07:12.875 [2024-07-15 22:30:55.948060] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:12.875 22:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:12.875 22:30:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:12.875 22:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:12.875 22:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.875 22:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:12.875 22:30:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:12.875 22:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:12.875 22:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.875 22:30:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:12.875 22:30:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:13.449 22:30:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:13.449 22:30:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1192 -- # local i=0 00:07:13.449 22:30:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:07:13.449 22:30:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:07:13.449 22:30:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # sleep 2 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:07:15.386 22:30:58 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # return 0 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:15.386 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.386 
22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.386 [2024-07-15 22:30:58.820536] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- 
# [[ 0 == 0 ]] 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.386 [2024-07-15 22:30:58.868588] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 
00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:15.386 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.645 [2024-07-15 22:30:58.916764] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.645 [2024-07-15 22:30:58.964948] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 
00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:15.645 22:30:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.645 22:30:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:15.645 22:30:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:15.645 22:30:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:15.645 22:30:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:15.645 22:30:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.645 22:30:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:15.645 22:30:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:15.645 22:30:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:15.645 22:30:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.645 [2024-07-15 22:30:59.013114] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:15.645 22:30:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:15.645 22:30:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:15.645 
22:30:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:15.645 22:30:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.645 22:30:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:15.645 22:30:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:15.645 22:30:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:15.645 22:30:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.645 22:30:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:15.645 22:30:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.645 22:30:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:15.645 22:30:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.645 22:30:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:15.645 22:30:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:15.645 22:30:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:15.645 22:30:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.645 22:30:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:15.645 22:30:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:07:15.645 22:30:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:15.645 22:30:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.645 22:30:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:15.645 22:30:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:07:15.645 "tick_rate": 2700000000, 00:07:15.645 "poll_groups": [ 00:07:15.645 { 00:07:15.645 "name": 
"nvmf_tgt_poll_group_000", 00:07:15.645 "admin_qpairs": 2, 00:07:15.645 "io_qpairs": 84, 00:07:15.645 "current_admin_qpairs": 0, 00:07:15.645 "current_io_qpairs": 0, 00:07:15.645 "pending_bdev_io": 0, 00:07:15.645 "completed_nvme_io": 184, 00:07:15.645 "transports": [ 00:07:15.645 { 00:07:15.645 "trtype": "TCP" 00:07:15.645 } 00:07:15.645 ] 00:07:15.645 }, 00:07:15.645 { 00:07:15.645 "name": "nvmf_tgt_poll_group_001", 00:07:15.645 "admin_qpairs": 2, 00:07:15.645 "io_qpairs": 84, 00:07:15.645 "current_admin_qpairs": 0, 00:07:15.645 "current_io_qpairs": 0, 00:07:15.645 "pending_bdev_io": 0, 00:07:15.645 "completed_nvme_io": 134, 00:07:15.645 "transports": [ 00:07:15.645 { 00:07:15.645 "trtype": "TCP" 00:07:15.645 } 00:07:15.645 ] 00:07:15.645 }, 00:07:15.645 { 00:07:15.645 "name": "nvmf_tgt_poll_group_002", 00:07:15.645 "admin_qpairs": 1, 00:07:15.645 "io_qpairs": 84, 00:07:15.645 "current_admin_qpairs": 0, 00:07:15.645 "current_io_qpairs": 0, 00:07:15.645 "pending_bdev_io": 0, 00:07:15.645 "completed_nvme_io": 182, 00:07:15.645 "transports": [ 00:07:15.645 { 00:07:15.645 "trtype": "TCP" 00:07:15.645 } 00:07:15.645 ] 00:07:15.645 }, 00:07:15.645 { 00:07:15.645 "name": "nvmf_tgt_poll_group_003", 00:07:15.645 "admin_qpairs": 2, 00:07:15.645 "io_qpairs": 84, 00:07:15.645 "current_admin_qpairs": 0, 00:07:15.645 "current_io_qpairs": 0, 00:07:15.645 "pending_bdev_io": 0, 00:07:15.645 "completed_nvme_io": 186, 00:07:15.645 "transports": [ 00:07:15.645 { 00:07:15.645 "trtype": "TCP" 00:07:15.645 } 00:07:15.645 ] 00:07:15.645 } 00:07:15.645 ] 00:07:15.645 }' 00:07:15.645 22:30:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:07:15.645 22:30:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:15.645 22:30:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:15.645 22:30:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:15.645 22:30:59 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:07:15.646 22:30:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:07:15.646 22:30:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:15.646 22:30:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:15.646 22:30:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:15.646 22:30:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:07:15.646 22:30:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:07:15.646 22:30:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:07:15.646 22:30:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:07:15.646 22:30:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:15.646 22:30:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:07:15.904 22:30:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:15.904 22:30:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:07:15.904 22:30:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:15.904 22:30:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:15.904 rmmod nvme_tcp 00:07:15.904 rmmod nvme_fabrics 00:07:15.904 rmmod nvme_keyring 00:07:15.904 22:30:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:15.904 22:30:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:07:15.904 22:30:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:07:15.904 22:30:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1169301 ']' 00:07:15.904 22:30:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1169301 00:07:15.904 22:30:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@942 -- # '[' -z 1169301 ']' 00:07:15.904 22:30:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # kill -0 1169301 00:07:15.904 22:30:59 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@947 -- # uname 00:07:15.904 22:30:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:07:15.904 22:30:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1169301 00:07:15.904 22:30:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:07:15.904 22:30:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:07:15.904 22:30:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1169301' 00:07:15.904 killing process with pid 1169301 00:07:15.904 22:30:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@961 -- # kill 1169301 00:07:15.904 22:30:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # wait 1169301 00:07:16.161 22:30:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:16.161 22:30:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:16.161 22:30:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:16.161 22:30:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:16.161 22:30:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:16.161 22:30:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.161 22:30:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:16.161 22:30:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.695 22:31:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:18.695 00:07:18.695 real 0m26.108s 00:07:18.695 user 1m25.410s 00:07:18.695 sys 0m4.185s 00:07:18.695 22:31:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:18.695 22:31:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.695 
************************************ 00:07:18.695 END TEST nvmf_rpc 00:07:18.695 ************************************ 00:07:18.695 22:31:01 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:07:18.695 22:31:01 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:18.695 22:31:01 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:07:18.695 22:31:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:18.695 22:31:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:18.695 ************************************ 00:07:18.695 START TEST nvmf_invalid 00:07:18.695 ************************************ 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:18.695 * Looking for test storage... 00:07:18.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.695 
22:31:01 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:07:18.695 22:31:01 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:18.695 22:31:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:18.696 22:31:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:18.696 22:31:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:07:18.696 22:31:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:07:18.696 22:31:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:07:18.696 22:31:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:18.696 22:31:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:18.696 22:31:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:18.696 22:31:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:18.696 22:31:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:18.696 22:31:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.696 22:31:01 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:18.696 22:31:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.696 22:31:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:18.696 22:31:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:18.696 22:31:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:07:18.696 22:31:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:20.071 
22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:20.071 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:20.071 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 
1 == 0 )) 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:20.071 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:20.071 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:20.071 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:20.328 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:20.329 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:20.329 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:20.329 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:20.329 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:20.329 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:20.329 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:20.329 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:07:20.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:07:20.329 00:07:20.329 --- 10.0.0.2 ping statistics --- 00:07:20.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.329 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:07:20.329 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:20.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:20.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:07:20.329 00:07:20.329 --- 10.0.0.1 ping statistics --- 00:07:20.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.329 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:07:20.329 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:20.329 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:07:20.329 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:20.329 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:20.329 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:20.329 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:20.329 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:20.329 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:20.329 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:20.329 22:31:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:20.329 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:20.329 22:31:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:20.329 22:31:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:20.329 
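The `nvmf_tcp_init` sequence traced above (common.sh@244-268) moves the target NIC into a fresh network namespace, addresses both ends from 10.0.0.0/24, opens TCP/4420 in the firewall, and ping-checks both directions. A dry-run sketch of that sequence — `run()` only prints here; dropping the `echo` would execute for real, which requires root and the two NICs:

```shell
# Dry-run reconstruction of the namespace plumbing in the trace.
run() { echo "+ $*"; }   # swap for real execution: run() { "$@"; }

nvmf_tcp_init_sketch() {
    local tgt_if=cvl_0_0 init_if=cvl_0_1 ns=cvl_0_0_ns_spdk
    run ip -4 addr flush "$tgt_if"
    run ip -4 addr flush "$init_if"
    run ip netns add "$ns"
    run ip link set "$tgt_if" netns "$ns"        # target NIC lives in the ns
    run ip addr add 10.0.0.1/24 dev "$init_if"   # initiator side stays in root ns
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    run ip link set "$init_if" up
    run ip netns exec "$ns" ip link set "$tgt_if" up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i "$init_if" -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2                       # initiator -> target
    run ip netns exec "$ns" ping -c 1 10.0.0.1   # target -> initiator
}
nvmf_tcp_init_sketch
```

Isolating the target NIC in `cvl_0_0_ns_spdk` is what lets the test launch `nvmf_tgt` under `ip netns exec`, so target and initiator talk over a real TCP path between two physical ports on the same host.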
22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1173925 00:07:20.329 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:20.329 22:31:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1173925 00:07:20.329 22:31:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@823 -- # '[' -z 1173925 ']' 00:07:20.329 22:31:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.329 22:31:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@828 -- # local max_retries=100 00:07:20.329 22:31:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.329 22:31:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # xtrace_disable 00:07:20.329 22:31:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:20.329 [2024-07-15 22:31:03.766246] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:07:20.329 [2024-07-15 22:31:03.766329] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:20.588 [2024-07-15 22:31:03.835079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:20.588 [2024-07-15 22:31:03.956175] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:20.588 [2024-07-15 22:31:03.956257] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:20.588 [2024-07-15 22:31:03.956273] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:20.588 [2024-07-15 22:31:03.956286] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:20.588 [2024-07-15 22:31:03.956297] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:20.588 [2024-07-15 22:31:03.956385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.588 [2024-07-15 22:31:03.956443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.588 [2024-07-15 22:31:03.956497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:20.588 [2024-07-15 22:31:03.956500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.523 22:31:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:07:21.523 22:31:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # return 0 00:07:21.523 22:31:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:21.523 22:31:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:21.523 22:31:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:21.523 22:31:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:21.523 22:31:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:21.523 22:31:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode24179 00:07:21.523 [2024-07-15 22:31:04.992872] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:07:21.523 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- 
# out='request: 00:07:21.523 { 00:07:21.523 "nqn": "nqn.2016-06.io.spdk:cnode24179", 00:07:21.523 "tgt_name": "foobar", 00:07:21.523 "method": "nvmf_create_subsystem", 00:07:21.523 "req_id": 1 00:07:21.523 } 00:07:21.523 Got JSON-RPC error response 00:07:21.523 response: 00:07:21.523 { 00:07:21.523 "code": -32603, 00:07:21.523 "message": "Unable to find target foobar" 00:07:21.523 }' 00:07:21.523 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:07:21.523 { 00:07:21.523 "nqn": "nqn.2016-06.io.spdk:cnode24179", 00:07:21.523 "tgt_name": "foobar", 00:07:21.523 "method": "nvmf_create_subsystem", 00:07:21.523 "req_id": 1 00:07:21.523 } 00:07:21.523 Got JSON-RPC error response 00:07:21.523 response: 00:07:21.523 { 00:07:21.523 "code": -32603, 00:07:21.523 "message": "Unable to find target foobar" 00:07:21.523 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:07:21.523 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:07:21.523 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode12474 00:07:21.781 [2024-07-15 22:31:05.233667] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12474: invalid serial number 'SPDKISFASTANDAWESOME' 00:07:21.781 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:07:21.781 { 00:07:21.781 "nqn": "nqn.2016-06.io.spdk:cnode12474", 00:07:21.781 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:21.781 "method": "nvmf_create_subsystem", 00:07:21.781 "req_id": 1 00:07:21.781 } 00:07:21.781 Got JSON-RPC error response 00:07:21.781 response: 00:07:21.781 { 00:07:21.781 "code": -32602, 00:07:21.781 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:21.781 }' 00:07:21.781 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:07:21.781 { 00:07:21.781 "nqn": 
"nqn.2016-06.io.spdk:cnode12474", 00:07:21.781 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:21.781 "method": "nvmf_create_subsystem", 00:07:21.781 "req_id": 1 00:07:21.781 } 00:07:21.781 Got JSON-RPC error response 00:07:21.781 response: 00:07:21.781 { 00:07:21.781 "code": -32602, 00:07:21.781 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:21.781 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:21.781 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:21.781 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode30689 00:07:22.040 [2024-07-15 22:31:05.482489] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30689: invalid model number 'SPDK_Controller' 00:07:22.040 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:07:22.040 { 00:07:22.040 "nqn": "nqn.2016-06.io.spdk:cnode30689", 00:07:22.040 "model_number": "SPDK_Controller\u001f", 00:07:22.040 "method": "nvmf_create_subsystem", 00:07:22.040 "req_id": 1 00:07:22.040 } 00:07:22.040 Got JSON-RPC error response 00:07:22.040 response: 00:07:22.040 { 00:07:22.040 "code": -32602, 00:07:22.040 "message": "Invalid MN SPDK_Controller\u001f" 00:07:22.040 }' 00:07:22.040 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:07:22.040 { 00:07:22.040 "nqn": "nqn.2016-06.io.spdk:cnode30689", 00:07:22.040 "model_number": "SPDK_Controller\u001f", 00:07:22.040 "method": "nvmf_create_subsystem", 00:07:22.040 "req_id": 1 00:07:22.040 } 00:07:22.040 Got JSON-RPC error response 00:07:22.040 response: 00:07:22.040 { 00:07:22.040 "code": -32602, 00:07:22.040 "message": "Invalid MN SPDK_Controller\u001f" 00:07:22.040 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:22.040 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:07:22.040 22:31:05 nvmf_tcp.nvmf_invalid 
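The backslash-riddled patterns in the trace (e.g. `*\I\n\v\a\l\i\d\ \M\N*`) are just xtrace escaping each character of a glob; in the source, invalid.sh captures the `rpc.py` error output and checks it with a plain substring glob. A small runnable illustration of that idiom:

```shell
# What invalid.sh@51 effectively does: match the captured JSON-RPC
# error text against a substring glob. The $out value here is a
# hypothetical stand-in for real rpc.py output.
out='{ "code": -32602, "message": "Invalid MN SPDK_Controller" }'
if [[ $out == *"Invalid MN"* ]]; then
    echo "error detected"
fi
# → error detected
```

The same pattern covers the other cases in this section: `*Unable to find target*` for a bad `tgt_name` and `*Invalid SN*` for a bad serial number.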
-- target/invalid.sh@19 -- # local length=21 ll 00:07:22.040 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:22.040 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:22.040 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:22.040 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:22.040 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x32' 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=I 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.041 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 120 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ $ == \- ]] 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '$72q-;M@]&aIeG5Fx{k"X' 00:07:22.301 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '$72q-;M@]&aIeG5Fx{k"X' nqn.2016-06.io.spdk:cnode18793 00:07:22.301 [2024-07-15 22:31:05.791497] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18793: invalid serial number '$72q-;M@]&aIeG5Fx{k"X' 00:07:22.559 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:07:22.559 { 00:07:22.559 "nqn": "nqn.2016-06.io.spdk:cnode18793", 00:07:22.559 "serial_number": "$72q-;M@]&aIeG5Fx{k\"X", 00:07:22.559 "method": "nvmf_create_subsystem", 00:07:22.559 "req_id": 1 00:07:22.559 } 00:07:22.559 Got JSON-RPC error response 00:07:22.560 response: 00:07:22.560 { 00:07:22.560 "code": -32602, 00:07:22.560 "message": "Invalid SN $72q-;M@]&aIeG5Fx{k\"X" 00:07:22.560 }' 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:07:22.560 { 00:07:22.560 "nqn": "nqn.2016-06.io.spdk:cnode18793", 00:07:22.560 "serial_number": "$72q-;M@]&aIeG5Fx{k\"X", 00:07:22.560 "method": "nvmf_create_subsystem", 00:07:22.560 "req_id": 1 00:07:22.560 } 00:07:22.560 Got JSON-RPC error response 00:07:22.560 response: 00:07:22.560 { 00:07:22.560 "code": -32602, 00:07:22.560 "message": "Invalid SN $72q-;M@]&aIeG5Fx{k\"X" 00:07:22.560 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' 
'37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x3c' 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=q 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 49 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.560 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 
00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.561 
22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:07:22.561 
22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ S == \- ]] 00:07:22.561 22:31:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'SvB;wN= /dev/null' 00:07:25.400 22:31:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.306 22:31:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:27.306 00:07:27.306 real 0m9.075s 00:07:27.306 user 0m22.383s 
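The invalid.sh trace above builds a nonsense controller name one character at a time with `printf %x` and `echo -e`. A hypothetical standalone sketch of the same technique (`gen_random_string` is not a function in the real script):

```shell
# Build a random printable-ASCII string the way target/invalid.sh does:
# pick a code point, render it as hex, decode the \xHH escape, append.
gen_random_string() {
    local length=$1 ll string=''
    for (( ll = 0; ll < length; ll++ )); do
        # Printable ASCII runs from 0x21 ('!') to 0x7e ('~').
        string+=$(echo -e "\x$(printf %x $(( RANDOM % 94 + 33 )))")
    done
    # printf avoids echo's option parsing if the string starts with '-'.
    printf '%s\n' "$string"
}

gen_random_string 25
```

The generated strings deliberately include shell metacharacters (`#`, `{`, `|`, `?`) so the test exercises how the target rejects malformed names.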
00:07:27.306 sys 0m2.341s 00:07:27.306 22:31:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:27.306 22:31:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:27.306 ************************************ 00:07:27.306 END TEST nvmf_invalid 00:07:27.306 ************************************ 00:07:27.306 22:31:10 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:07:27.306 22:31:10 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:27.306 22:31:10 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:07:27.306 22:31:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:27.306 22:31:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:27.306 ************************************ 00:07:27.306 START TEST nvmf_abort 00:07:27.306 ************************************ 00:07:27.306 22:31:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:27.306 * Looking for test storage... 
00:07:27.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:27.564 22:31:10 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:27.564 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:27.564 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:27.564 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:27.564 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:27.564 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:27.564 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:27.564 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:27.564 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:27.564 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:27.564 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:27.564 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:27.564 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:27.564 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:27.564 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:27.564 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:27.564 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:27.565 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:27.565 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:27.565 22:31:10 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:27.565 22:31:10 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:27.565 22:31:10 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:27.565 22:31:10 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.565 22:31:10 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.565 22:31:10 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.565 22:31:10 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:27.565 22:31:10 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.565 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:27.565 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:27.565 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:27.565 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:27.565 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:27.565 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:27.565 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:27.565 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:27.565 22:31:10 nvmf_tcp.nvmf_abort -- 
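The paths/export.sh trace above shows PATH growing with the same `/opt/go`, `/opt/protoc`, and `/opt/golangci` prefixes repeated every time the file is sourced. A hypothetical order-preserving dedup helper (not part of export.sh) for cleaning such a PATH:

```shell
# Keep only the first occurrence of each directory, preserving order.
dedupe_path() {
    local entry result=''
    local IFS=':'               # split $1 on colons; restored on return
    for entry in $1; do
        case ":$result:" in
            *":$entry:"*) ;;                      # already present, skip
            *) result=${result:+$result:}$entry ;; # append, colon-separated
        esac
    done
    printf '%s\n' "$result"
}

dedupe_path "/opt/go/bin:/usr/bin:/opt/go/bin:/bin:/usr/bin"
```

Since export.sh only prepends, duplicates are harmless for lookup order, just noisy in logs like this one.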
nvmf/common.sh@51 -- # have_pci_nics=0 00:07:27.565 22:31:10 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:27.565 22:31:10 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:27.565 22:31:10 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:27.565 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:27.565 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:27.565 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:27.565 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:27.565 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:27.565 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.565 22:31:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:27.565 22:31:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.565 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:27.565 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:27.565 22:31:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:27.565 22:31:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:29.518 22:31:12 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:29.518 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:29.518 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- 
# (( 0 > 0 )) 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:29.518 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:29.518 Found net devices under 
0000:0a:00.1: cvl_0_1 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:29.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:29.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:07:29.518 00:07:29.518 --- 10.0.0.2 ping statistics --- 00:07:29.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.518 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:29.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
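The namespace plumbing above (common.sh `nvmf_tcp_init`) turns the two ports of one physical NIC into a point-to-point test network: the target port moves into a namespace with 10.0.0.2, the initiator port stays in the root namespace with 10.0.0.1. A hypothetical condensed sketch; `IP`/`IPT` default to `echo` here (dry run) because the real commands need root:

```shell
IP=${IP:-echo ip}
IPT=${IPT:-echo iptables}

nvmf_tcp_net_init() {
    local target_if=$1 initiator_if=$2 ns=cvl_0_0_ns_spdk
    $IP netns add "$ns"                    # isolate the target side
    $IP link set "$target_if" netns "$ns"  # move the target port into it
    $IP addr add 10.0.0.1/24 dev "$initiator_if"
    $IP netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    $IP link set "$initiator_if" up
    $IP netns exec "$ns" ip link set "$target_if" up
    # Allow NVMe/TCP traffic to the default port on the initiator side.
    $IPT -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
}

nvmf_tcp_net_init cvl_0_0 cvl_0_1
```

Unset `IP`/`IPT` (or set them to the real binaries) and run as root to apply the topology for real.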
00:07:29.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:07:29.518 00:07:29.518 --- 10.0.0.1 ping statistics --- 00:07:29.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.518 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1176567 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1176567 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@823 -- # '[' -z 1176567 ']' 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.518 22:31:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
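The two pings above verify the namespace link in both directions before the target starts. If a script needed the measured latency rather than just reachability, the average RTT can be pulled from ping's summary line; `parse_ping_avg` is a hypothetical helper, not part of common.sh:

```shell
# ping's summary line looks like:
#   rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms
# Splitting on both '/' and ' ' puts the avg value in field 8.
parse_ping_avg() {
    awk -F'[/ ]' '/^rtt/ { print $8 }' <<< "$1"
}

parse_ping_avg "rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms"
```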
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:29.519 22:31:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@828 -- # local max_retries=100 00:07:29.519 22:31:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.519 22:31:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # xtrace_disable 00:07:29.519 22:31:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:29.519 [2024-07-15 22:31:12.975015] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:07:29.519 [2024-07-15 22:31:12.975100] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.778 [2024-07-15 22:31:13.046235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:29.778 [2024-07-15 22:31:13.166669] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:29.778 [2024-07-15 22:31:13.166734] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:29.778 [2024-07-15 22:31:13.166750] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:29.778 [2024-07-15 22:31:13.166763] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:29.778 [2024-07-15 22:31:13.166775] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:29.778 [2024-07-15 22:31:13.166835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.778 [2024-07-15 22:31:13.166903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:29.778 [2024-07-15 22:31:13.166908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.709 22:31:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:07:30.709 22:31:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # return 0 00:07:30.709 22:31:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:30.709 22:31:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:30.709 22:31:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.709 22:31:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:30.709 22:31:13 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:30.709 22:31:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:30.709 22:31:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.709 [2024-07-15 22:31:13.945379] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.709 22:31:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:30.709 22:31:13 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:30.709 22:31:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:30.709 22:31:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.709 Malloc0 00:07:30.709 22:31:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:30.709 22:31:13 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 
1000000 00:07:30.709 22:31:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:30.709 22:31:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.709 Delay0 00:07:30.709 22:31:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:30.709 22:31:13 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:30.709 22:31:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:30.709 22:31:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.709 22:31:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:30.709 22:31:14 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:30.709 22:31:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:30.709 22:31:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.709 22:31:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:30.709 22:31:14 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:30.709 22:31:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:30.709 22:31:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.709 [2024-07-15 22:31:14.012981] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:30.709 22:31:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:30.709 22:31:14 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:30.709 22:31:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:30.709 22:31:14 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:07:30.709 22:31:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:30.709 22:31:14 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:30.709 [2024-07-15 22:31:14.160018] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:33.242 Initializing NVMe Controllers 00:07:33.242 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:33.243 controller IO queue size 128 less than required 00:07:33.243 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:33.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:33.243 Initialization complete. Launching workers. 
00:07:33.243 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 31481 00:07:33.243 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31542, failed to submit 62 00:07:33.243 success 31485, unsuccess 57, failed 0 00:07:33.243 22:31:16 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:33.243 22:31:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:33.243 22:31:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:33.243 22:31:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:33.243 22:31:16 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:33.243 22:31:16 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:33.243 22:31:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:33.243 22:31:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:33.243 22:31:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:33.243 22:31:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:33.243 22:31:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:33.243 22:31:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:33.243 rmmod nvme_tcp 00:07:33.243 rmmod nvme_fabrics 00:07:33.243 rmmod nvme_keyring 00:07:33.243 22:31:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:33.243 22:31:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:33.243 22:31:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:33.243 22:31:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1176567 ']' 00:07:33.243 22:31:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1176567 00:07:33.243 22:31:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@942 -- # '[' -z 1176567 ']' 00:07:33.243 22:31:16 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # kill -0 1176567 00:07:33.243 22:31:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@947 -- # uname 00:07:33.243 22:31:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:07:33.243 22:31:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1176567 00:07:33.243 22:31:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:07:33.243 22:31:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:07:33.243 22:31:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1176567' 00:07:33.243 killing process with pid 1176567 00:07:33.243 22:31:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@961 -- # kill 1176567 00:07:33.243 22:31:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # wait 1176567 00:07:33.243 22:31:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:33.243 22:31:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:33.243 22:31:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:33.243 22:31:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:33.243 22:31:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:33.243 22:31:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.243 22:31:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:33.243 22:31:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.780 22:31:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:35.780 00:07:35.780 real 0m7.909s 00:07:35.780 user 0m12.694s 00:07:35.780 sys 0m2.516s 00:07:35.780 22:31:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1118 -- # xtrace_disable 
00:07:35.780 22:31:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:35.780 ************************************ 00:07:35.780 END TEST nvmf_abort 00:07:35.780 ************************************ 00:07:35.780 22:31:18 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:07:35.780 22:31:18 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:35.780 22:31:18 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:07:35.780 22:31:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:35.780 22:31:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:35.780 ************************************ 00:07:35.780 START TEST nvmf_ns_hotplug_stress 00:07:35.780 ************************************ 00:07:35.780 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:35.780 * Looking for test storage... 
00:07:35.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:35.780 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:35.780 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:35.780 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.780 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.780 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.780 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.780 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.780 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.780 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.780 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.780 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.780 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.780 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:35.780 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:35.780 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.780 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.780 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:35.780 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.780 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:35.780 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.780 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.780 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.780 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.781 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.781 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.781 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:35.781 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.781 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:35.781 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:35.781 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:35.781 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.781 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.781 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.781 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:35.781 22:31:18 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:35.781 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:35.781 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:35.781 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:35.781 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:35.781 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:35.781 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:35.781 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:35.781 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:35.781 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.781 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:35.781 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.781 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:35.781 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:35.781 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:35.781 22:31:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:37.689 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:37.689 
22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:37.689 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:37.689 
Found net devices under 0000:0a:00.0: cvl_0_0 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:37.689 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:37.689 22:31:20 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:37.689 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:37.689 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:07:37.689 00:07:37.689 --- 10.0.0.2 ping statistics --- 00:07:37.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.689 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:07:37.689 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:37.689 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:37.689 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:07:37.689 00:07:37.689 --- 10.0.0.1 ping statistics --- 00:07:37.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.690 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:07:37.690 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:37.690 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:07:37.690 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:37.690 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:37.690 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:37.690 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:37.690 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:37.690 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:37.690 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:37.690 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:37.690 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- 
# timing_enter start_nvmf_tgt 00:07:37.690 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:37.690 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:37.690 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1178921 00:07:37.690 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:37.690 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1178921 00:07:37.690 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@823 -- # '[' -z 1178921 ']' 00:07:37.690 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.690 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@828 -- # local max_retries=100 00:07:37.690 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.690 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # xtrace_disable 00:07:37.690 22:31:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:37.690 [2024-07-15 22:31:20.990340] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:07:37.690 [2024-07-15 22:31:20.990435] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.690 [2024-07-15 22:31:21.059138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:37.690 [2024-07-15 22:31:21.180207] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:37.690 [2024-07-15 22:31:21.180265] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:37.690 [2024-07-15 22:31:21.180291] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:37.690 [2024-07-15 22:31:21.180304] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:37.690 [2024-07-15 22:31:21.180315] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:37.690 [2024-07-15 22:31:21.180406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:37.690 [2024-07-15 22:31:21.180460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:07:37.690 [2024-07-15 22:31:21.180463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:38.625 22:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:07:38.625 22:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # return 0
00:07:38.625 22:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:07:38.625 22:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable
00:07:38.625 22:31:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:38.625 22:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:38.625 22:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:07:38.625 22:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:07:38.890 [2024-07-15 22:31:22.271564] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:38.890 22:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:39.148 22:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:39.405 [2024-07-15 22:31:22.786503] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:39.405 22:31:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:39.663 22:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:07:39.922 Malloc0
00:07:39.922 22:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:07:40.182 Delay0
00:07:40.182 22:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:40.440 22:31:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:07:40.697 NULL1
00:07:40.697 22:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:07:40.955 22:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1179348
00:07:40.955 22:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:07:40.955 22:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1179348
00:07:40.955 22:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:41.213 22:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:41.470 22:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:07:41.470 22:31:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:07:41.728 true
00:07:41.728 22:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1179348
00:07:41.728 22:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:41.986 22:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:42.243 22:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:07:42.243 22:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:07:42.501 true
00:07:42.501 22:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1179348
00:07:42.501 22:31:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:43.437 Read completed with error (sct=0, sc=11)
00:07:43.437 22:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:43.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:43.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:43.695 22:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:07:43.695 22:31:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:07:43.953 true
00:07:43.953 22:31:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1179348
00:07:43.953 22:31:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:44.211 22:31:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:44.469 22:31:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:07:44.469 22:31:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:07:44.726 true
00:07:44.726 22:31:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1179348
00:07:44.726 22:31:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:45.698 22:31:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:45.698 22:31:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:07:45.698 22:31:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:07:46.264 true
00:07:46.264 22:31:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1179348
00:07:46.264 22:31:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:46.264 22:31:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:46.522 22:31:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:07:46.522 22:31:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:07:46.779 true
00:07:46.779 22:31:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1179348
00:07:46.779 22:31:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:47.036 22:31:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:47.294 22:31:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:07:47.294 22:31:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:07:47.553 true
00:07:47.553 22:31:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1179348
00:07:47.553 22:31:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:48.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:48.929 22:31:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:48.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:48.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:48.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:48.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:48.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:48.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:48.929 22:31:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:07:48.929 22:31:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:07:49.184 true
00:07:49.184 22:31:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1179348
00:07:49.184 22:31:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:50.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:50.116 22:31:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:50.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:50.116 22:31:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:07:50.116 22:31:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:07:50.372 true
00:07:50.372 22:31:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1179348
00:07:50.372 22:31:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:50.628 22:31:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:50.884 22:31:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:07:50.884 22:31:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:07:51.140 true
00:07:51.140 22:31:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1179348
00:07:51.140 22:31:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:52.092 22:31:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:52.349 22:31:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:07:52.349 22:31:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:07:52.607 true
00:07:52.607 22:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1179348
00:07:52.607 22:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:52.865 22:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:53.122 22:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:07:53.122 22:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:07:53.378 true
00:07:53.378 22:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1179348
00:07:53.378 22:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:54.313 22:31:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:54.313 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:54.313 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:54.570 22:31:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:07:54.570 22:31:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:07:54.828 true
00:07:54.828 22:31:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1179348
00:07:54.828 22:31:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:55.086 22:31:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:55.344 22:31:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:07:55.344 22:31:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:07:55.602 true
00:07:55.602 22:31:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1179348
00:07:55.602 22:31:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:56.538 22:31:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:56.538 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:56.538 22:31:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:07:56.538 22:31:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:07:56.796 true
00:07:56.796 22:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1179348
00:07:56.796 22:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:57.053 22:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:57.312 22:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:07:57.312 22:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:07:57.570 true
00:07:57.570 22:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1179348
00:07:57.570 22:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:58.508 22:31:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:58.766 22:31:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:07:58.766 22:31:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:07:59.026 true
00:07:59.026 22:31:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1179348
00:07:59.026 22:31:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:59.352 22:31:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:59.610 22:31:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:07:59.610 22:31:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:07:59.868 true
00:07:59.868 22:31:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1179348
00:07:59.868 22:31:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:00.125 22:31:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:00.382 22:31:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:08:00.382 22:31:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:08:00.640 true
00:08:00.640 22:31:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1179348
00:08:00.640 22:31:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:01.576 22:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:01.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:01.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:01.834 22:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:08:01.834 22:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:08:02.400 true
00:08:02.400 22:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1179348
00:08:02.400 22:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:02.400 22:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:02.658 22:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:08:02.659 22:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:08:02.916 true
00:08:02.916 22:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1179348
00:08:02.917 22:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:03.174 22:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:03.432 22:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:08:03.432 22:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:08:03.689 true
00:08:03.689 22:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1179348
00:08:03.689 22:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:05.065 22:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:05.065 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:05.065 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:05.065 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:05.065 22:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:08:05.065 22:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:08:05.322 true
00:08:05.322 22:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1179348
00:08:05.322 22:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:05.579 22:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:05.835 22:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:08:05.836 22:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:08:06.092 true
00:08:06.092 22:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1179348
00:08:06.092 22:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:07.024 22:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:07.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:07.282 22:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:08:07.282 22:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:08:07.539 true
00:08:07.539 22:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1179348
00:08:07.539 22:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:07.797 22:31:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:08.054 22:31:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:08:08.054 22:31:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:08:08.311 true
00:08:08.311 22:31:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1179348
00:08:08.311 22:31:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:09.245 22:31:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:09.245 22:31:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:08:09.245 22:31:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:08:09.504 true
00:08:09.504 22:31:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1179348
00:08:09.504 22:31:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:09.763 22:31:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:10.020 22:31:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:08:10.020 22:31:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:08:10.278 true
00:08:10.278 22:31:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1179348
00:08:10.278 22:31:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:10.535 22:31:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:10.793 22:31:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:08:10.793 22:31:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:08:11.051 true
00:08:11.051 22:31:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1179348
00:08:11.051 22:31:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:12.429 Initializing NVMe Controllers
00:08:12.429 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:12.429 Controller IO queue size 128, less than required.
00:08:12.429 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:12.429 Controller IO queue size 128, less than required.
00:08:12.429 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:12.429 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:12.429 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:12.429 Initialization complete. Launching workers.
00:08:12.429 ========================================================
00:08:12.429                                                                    Latency(us)
00:08:12.429 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:08:12.429 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     737.43       0.36   83729.15    2325.37 1011813.18
00:08:12.429 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    9489.84       4.63   13448.91    2604.86  448961.03
00:08:12.429 ========================================================
00:08:12.429 Total                                                                    :   10227.27       4.99   18516.40    2325.37 1011813.18
00:08:12.429
00:08:12.429 22:31:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:12.429 22:31:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:08:12.429 22:31:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:08:12.687 true
00:08:12.687 22:31:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1179348
00:08:12.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1179348) - No such process
00:08:12.687 22:31:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1179348
00:08:12.687 22:31:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:12.951 22:31:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:13.240 22:31:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:08:13.240 22:31:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:08:13.240 22:31:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:08:13.240 22:31:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:13.240 22:31:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:08:13.498 null0
00:08:13.498 22:31:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:13.498 22:31:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:13.498 22:31:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:08:13.756 null1
00:08:13.756 22:31:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:13.756 22:31:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:13.756 22:31:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:08:14.014 null2
00:08:14.014 22:31:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:14.014 22:31:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:14.014 22:31:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:08:14.272 null3
00:08:14.272 22:31:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:14.272 22:31:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:14.272 22:31:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:14.530 null4 00:08:14.530 22:31:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:14.530 22:31:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:14.530 22:31:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:14.788 null5 00:08:14.788 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:14.788 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:14.788 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:15.046 null6 00:08:15.046 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:15.046 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:15.046 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:15.304 null7 00:08:15.304 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:15.304 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:15.304 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:15.304 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:15.304 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- 
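The trace above shows the test creating eight null bdevs, one per hotplug worker, each 100 MiB with a 4096-byte block size. A minimal dry-run sketch of that setup step (the commands are echoed rather than executed, and the `rpc.py` path is shortened from the full Jenkins workspace path seen in the log):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the bdev_null_create loop from the log above.
# Assumption: "scripts/rpc.py" stands in for the full workspace path.
rpc="scripts/rpc.py"
nthreads=8

for ((i = 0; i < nthreads; i++)); do
    # bdev_null_create <name> <size_mb> <block_size>, as seen in the trace
    echo "$rpc bdev_null_create null$i 100 4096"
done
```

Running it prints the eight `bdev_null_create` calls (null0 through null7) that the log records one at a time.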
target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:15.304 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:15.304 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:15.304 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:15.304 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:15.304 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:15.304 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.304 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:15.304 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1183480 1183482 1183485 1183489 1183492 1183495 1183499 1183501 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.305 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:15.563 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:15.563 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:15.563 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:15.563 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:15.563 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:15.563 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.563 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:15.563 22:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:15.822 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.822 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.822 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:15.822 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.822 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.822 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:15.822 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.822 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.822 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:15.822 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.822 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.822 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:15.822 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.822 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.822 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:15.822 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.822 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.822 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:15.822 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.822 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.822 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:15.822 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.822 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.822 22:31:59 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:16.080 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:16.080 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:16.080 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:16.080 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:16.080 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.081 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:16.081 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:16.081 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:16.339 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:16.339 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.339 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:16.339 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.339 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.339 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:16.339 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.339 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.339 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:16.339 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.339 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.339 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:16.339 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.339 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.339 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:08:16.339 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.339 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.339 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:16.339 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.339 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.339 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:16.339 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.339 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.339 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:16.597 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:16.597 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:16.597 22:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:16.597 22:31:59 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:16.597 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:16.597 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:16.597 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.597 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:16.855 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.855 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.855 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:16.855 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.855 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.855 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:16.855 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:16.855 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.855 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:16.855 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.855 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.855 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:16.855 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.855 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.855 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:16.855 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.855 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.855 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:16.855 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.855 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.855 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:08:16.855 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.855 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.855 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:17.113 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:17.113 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:17.113 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:17.113 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:17.113 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:17.113 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:17.113 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.113 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:17.372 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.372 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.372 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:17.372 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.372 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.372 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:17.372 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.372 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.372 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:17.372 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.372 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.372 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:17.372 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.372 22:32:00 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.372 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:17.372 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.372 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.372 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:17.372 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.372 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.372 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:17.372 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.372 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.372 22:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:17.630 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:17.630 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:08:17.630 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:17.630 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:17.630 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.630 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:17.630 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:17.630 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:17.888 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.888 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.888 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:17.888 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.888 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.888 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:17.888 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.888 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.888 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:17.888 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.888 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.888 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:17.888 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.888 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.888 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:17.888 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.888 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.888 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:17.888 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.888 22:32:01 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.888 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:18.146 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.146 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.147 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:18.147 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:18.147 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:18.147 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:18.147 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:18.147 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:18.147 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 
1 00:08:18.147 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:18.405 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:18.405 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.405 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.405 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:18.405 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.405 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.405 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:18.405 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.405 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.405 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:18.405 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.405 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.405 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:18.405 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.405 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.405 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:18.664 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.664 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.664 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:18.664 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.664 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.664 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:18.664 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.664 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.664 22:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:18.664 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:18.664 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:18.922 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:18.922 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:18.922 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:18.922 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:18.922 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.922 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:19.180 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.180 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.180 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:08:19.180 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.180 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.180 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:19.180 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.180 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.180 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:19.180 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.180 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.180 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:19.180 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.180 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.180 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:19.180 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.180 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.180 22:32:02 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:19.180 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.180 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.180 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:19.180 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.180 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.180 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:19.438 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:19.438 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:19.438 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:19.438 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:19.438 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.438 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:19.438 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:19.438 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:19.696 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.696 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.696 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:19.696 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.696 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.696 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.696 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.696 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:19.696 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:19.696 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.696 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.696 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:19.696 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.696 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.697 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:19.697 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.697 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.697 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:19.697 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.697 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.697 22:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:19.697 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.697 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.697 22:32:03 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:19.954 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:19.954 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:19.954 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:19.954 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:19.954 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.954 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:19.954 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:19.954 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:20.211 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
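The churn being traced above (ns_hotplug_stress.sh lines @16-@18) boils down to a bounded loop that attaches namespaces 1-8 (backed by null bdevs null0-null7) to the subsystem and then detaches them again. The following is a dry-run reconstruction of that pattern, not the script itself: `RPC` is stubbed with `echo` here, and the shuffled ordering is an assumption based on the varying `-n` order in the trace; against a live target you would point `RPC` at `scripts/rpc.py`.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the hotplug stress pattern seen in the trace.
# RPC defaults to `echo`; set RPC=path/to/scripts/rpc.py to run for real.
RPC=${RPC:-echo}
NQN=nqn.2016-06.io.spdk:cnode1

i=0
while (( i < 10 )); do
    # Attach namespaces 1-8 in a shuffled order; namespace n is backed
    # by null bdev null(n-1), matching the trace (e.g. -n 2 ... null1).
    for n in $(shuf -e {1..8}); do
        $RPC nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))"
    done
    # Detach them again, also in a shuffled order.
    for n in $(shuf -e {1..8}); do
        $RPC nvmf_subsystem_remove_ns "$NQN" "$n"
    done
    (( ++i ))
done
```

In the real run the interleaved `(( ++i ))` / `(( i < 10 ))` xtrace lines above come from exactly this kind of counter-bounded loop; the out-of-order `-n` values are why the trace does not list namespaces 1 through 8 sequentially.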
00:08:20.211 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.211 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:20.211 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.211 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.211 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:20.211 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.211 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.211 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:20.211 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.211 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.211 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:20.211 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.211 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.211 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:08:20.211 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.211 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.211 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:20.211 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.211 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.211 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:20.211 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.211 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.211 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:20.468 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:20.468 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:20.468 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:20.468 22:32:03 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:20.468 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:20.468 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.468 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:20.468 22:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:20.726 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.726 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.726 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.726 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.726 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.726 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.726 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.726 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.726 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.726 22:32:04 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.726 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.726 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.726 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.726 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.726 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.726 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.726 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:20.726 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:20.726 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:20.726 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:20.726 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:20.726 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:20.726 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:20.726 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:20.726 rmmod nvme_tcp 00:08:20.726 rmmod nvme_fabrics 00:08:20.726 rmmod nvme_keyring 00:08:20.726 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:20.726 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:20.726 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:20.726 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1178921 ']' 00:08:20.726 
22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1178921 00:08:20.726 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@942 -- # '[' -z 1178921 ']' 00:08:20.726 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # kill -0 1178921 00:08:20.726 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@947 -- # uname 00:08:20.726 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:08:20.726 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1178921 00:08:20.726 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:08:20.726 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:08:20.727 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1178921' 00:08:20.727 killing process with pid 1178921 00:08:20.727 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@961 -- # kill 1178921 00:08:20.727 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # wait 1178921 00:08:21.293 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:21.293 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:21.293 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:21.293 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:21.293 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:21.293 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.294 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:08:21.294 22:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.194 22:32:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:23.194 00:08:23.194 real 0m47.828s 00:08:23.194 user 3m37.724s 00:08:23.194 sys 0m16.534s 00:08:23.194 22:32:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1118 -- # xtrace_disable 00:08:23.194 22:32:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:23.194 ************************************ 00:08:23.194 END TEST nvmf_ns_hotplug_stress 00:08:23.194 ************************************ 00:08:23.194 22:32:06 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:08:23.194 22:32:06 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:23.194 22:32:06 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:08:23.194 22:32:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:08:23.194 22:32:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:23.194 ************************************ 00:08:23.194 START TEST nvmf_connect_stress 00:08:23.194 ************************************ 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:23.195 * Looking for test storage... 
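The nvmftestfini teardown traced a few lines above (sync, retried `modprobe -v -r nvme-tcp`, `modprobe -v -r nvme-fabrics`, then `killprocess` on the target pid) follows the shape sketched below. This is a dry run, not the real `nvmf/common.sh`: `MODPROBE` and `KILL` are stubbed with `echo`, and the pid 1178921 is simply the one reported in the trace.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmftestfini-style teardown seen in the trace:
# unload the kernel NVMe-oF initiator modules, then kill the target app.
MODPROBE=${MODPROBE:-echo modprobe}
KILL=${KILL:-echo kill}
nvmfpid=1178921   # pid taken from the trace's killprocess call

sync
set +e
# Module removal can fail transiently while connections drain, so the
# trace shows a `for i in {1..20}` retry loop; mirror that here.
for i in {1..20}; do
    $MODPROBE -v -r nvme-tcp && break
done
$MODPROBE -v -r nvme-fabrics
set -e

# Finally stop the nvmf target process that was started for the test.
if [ -n "$nvmfpid" ]; then
    $KILL "$nvmfpid"
fi
```

The `rmmod nvme_tcp` / `rmmod nvme_fabrics` / `rmmod nvme_keyring` lines in the trace are the verbose output of those `modprobe -r` calls.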
00:08:23.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:23.195 22:32:06 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:23.195 22:32:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:25.722 22:32:08 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.722 
22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:25.722 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.722 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:25.723 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.723 
22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:25.723 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.723 22:32:08 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:25.723 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:25.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:25.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:08:25.723 00:08:25.723 --- 10.0.0.2 ping statistics --- 00:08:25.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.723 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:25.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:25.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:08:25.723 00:08:25.723 --- 10.0.0.1 ping statistics --- 00:08:25.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.723 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:25.723 22:32:08 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1186295 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1186295 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@823 -- # '[' -z 1186295 ']' 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@828 -- # local max_retries=100 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # xtrace_disable 00:08:25.723 22:32:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:25.723 [2024-07-15 22:32:08.906215] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:08:25.723 [2024-07-15 22:32:08.906310] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.723 [2024-07-15 22:32:08.975354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:25.723 [2024-07-15 22:32:09.091065] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:25.723 [2024-07-15 22:32:09.091151] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.723 [2024-07-15 22:32:09.091169] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:25.723 [2024-07-15 22:32:09.091182] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:25.723 [2024-07-15 22:32:09.091193] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:25.723 [2024-07-15 22:32:09.091310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.723 [2024-07-15 22:32:09.091404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:25.723 [2024-07-15 22:32:09.091407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.655 22:32:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:08:26.655 22:32:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # return 0 00:08:26.655 22:32:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:26.655 22:32:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:26.655 22:32:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:26.655 22:32:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.655 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:26.655 22:32:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:26.655 22:32:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:26.655 [2024-07-15 22:32:09.891113] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.655 22:32:09 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:26.655 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:26.655 22:32:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:26.655 22:32:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:26.655 22:32:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:26.655 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:26.655 22:32:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:26.655 22:32:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:26.655 [2024-07-15 22:32:09.924016] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.655 22:32:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:26.656 NULL1 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1186450 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.656 22:32:09 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.656 22:32:09 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1186450 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:26.656 22:32:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:26.912 22:32:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:26.912 22:32:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1186450 00:08:26.912 22:32:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:26.912 22:32:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:26.912 22:32:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:27.191 22:32:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:27.191 22:32:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1186450 00:08:27.191 22:32:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:27.191 22:32:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:27.191 22:32:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:27.454 22:32:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:27.454 22:32:10 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1186450 00:08:27.454 22:32:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:27.454 22:32:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:27.454 22:32:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:28.018 22:32:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:28.018 22:32:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1186450 00:08:28.018 22:32:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:28.019 22:32:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:28.019 22:32:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:28.276 22:32:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:28.276 22:32:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1186450 00:08:28.276 22:32:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:28.276 22:32:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:28.276 22:32:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:28.534 22:32:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:28.534 22:32:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1186450 00:08:28.534 22:32:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:28.534 22:32:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:28.534 22:32:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:28.792 22:32:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:28.792 22:32:12 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1186450 00:08:28.792 22:32:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:28.792 22:32:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:28.792 22:32:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.356 22:32:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:29.356 22:32:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1186450 00:08:29.356 22:32:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:29.356 22:32:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:29.356 22:32:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.614 22:32:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:29.614 22:32:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1186450 00:08:29.614 22:32:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:29.614 22:32:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:29.614 22:32:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.871 22:32:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:29.871 22:32:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1186450 00:08:29.871 22:32:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:29.871 22:32:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:29.871 22:32:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:30.128 22:32:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:30.128 22:32:13 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1186450 00:08:30.128 22:32:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:30.128 22:32:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:30.128 22:32:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:30.385 22:32:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:30.385 22:32:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1186450 00:08:30.385 22:32:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:30.385 22:32:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:30.385 22:32:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:30.949 22:32:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:30.949 22:32:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1186450 00:08:30.949 22:32:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:30.949 22:32:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:30.949 22:32:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:31.206 22:32:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:31.206 22:32:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1186450 00:08:31.206 22:32:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:31.206 22:32:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:31.206 22:32:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:31.464 22:32:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:31.464 22:32:14 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1186450 00:08:31.464 22:32:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:31.464 22:32:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:31.464 22:32:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:31.722 22:32:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:31.722 22:32:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1186450 00:08:31.722 22:32:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:31.722 22:32:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:31.722 22:32:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:31.979 22:32:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:31.979 22:32:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1186450 00:08:31.979 22:32:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:31.979 22:32:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:31.979 22:32:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:32.543 22:32:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:32.543 22:32:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1186450 00:08:32.543 22:32:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:32.543 22:32:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:32.543 22:32:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:32.800 22:32:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:32.800 22:32:16 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1186450 00:08:32.800 22:32:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:32.800 22:32:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:32.800 22:32:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:33.057 22:32:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:33.057 22:32:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1186450 00:08:33.057 22:32:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:33.057 22:32:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:33.057 22:32:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:33.315 22:32:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:33.315 22:32:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1186450 00:08:33.315 22:32:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:33.315 22:32:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:33.315 22:32:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:33.573 22:32:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:33.573 22:32:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1186450 00:08:33.573 22:32:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:33.573 22:32:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:33.573 22:32:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:34.139 22:32:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:34.139 22:32:17 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1186450 00:08:34.139 22:32:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:34.139 22:32:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:34.139 22:32:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:34.397 22:32:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:34.397 22:32:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1186450 00:08:34.397 22:32:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:34.397 22:32:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:34.397 22:32:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:34.654 22:32:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:34.654 22:32:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1186450 00:08:34.654 22:32:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:34.654 22:32:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:34.654 22:32:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:34.911 22:32:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:34.911 22:32:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1186450 00:08:34.911 22:32:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:34.911 22:32:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:34.911 22:32:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:35.169 22:32:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:35.169 22:32:18 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1186450 00:08:35.169 22:32:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:35.169 22:32:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:35.169 22:32:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:35.735 22:32:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:35.735 22:32:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1186450 00:08:35.735 22:32:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:35.735 22:32:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:35.735 22:32:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:35.992 22:32:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:35.992 22:32:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1186450 00:08:35.992 22:32:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:35.992 22:32:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:35.992 22:32:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:36.250 22:32:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:36.250 22:32:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1186450 00:08:36.250 22:32:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:36.250 22:32:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:36.250 22:32:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:36.508 22:32:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:36.508 22:32:19 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1186450 00:08:36.508 22:32:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:36.508 22:32:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:36.508 22:32:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:36.766 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:36.766 22:32:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:36.766 22:32:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1186450 00:08:36.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1186450) - No such process 00:08:36.766 22:32:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1186450 00:08:36.766 22:32:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:36.766 22:32:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:36.766 22:32:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:08:36.766 22:32:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:36.766 22:32:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:08:36.766 22:32:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:36.766 22:32:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:08:36.766 22:32:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:36.766 22:32:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:37.025 rmmod nvme_tcp 00:08:37.025 rmmod nvme_fabrics 00:08:37.025 rmmod nvme_keyring 00:08:37.025 22:32:20 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:37.025 22:32:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:08:37.025 22:32:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:08:37.025 22:32:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1186295 ']' 00:08:37.025 22:32:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1186295 00:08:37.025 22:32:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@942 -- # '[' -z 1186295 ']' 00:08:37.025 22:32:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # kill -0 1186295 00:08:37.025 22:32:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@947 -- # uname 00:08:37.025 22:32:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:08:37.025 22:32:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1186295 00:08:37.025 22:32:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:08:37.025 22:32:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:08:37.025 22:32:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1186295' 00:08:37.025 killing process with pid 1186295 00:08:37.025 22:32:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@961 -- # kill 1186295 00:08:37.025 22:32:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # wait 1186295 00:08:37.285 22:32:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:37.285 22:32:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:37.285 22:32:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:37.285 22:32:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:37.285 22:32:20 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:37.285 22:32:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.285 22:32:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:37.285 22:32:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.216 22:32:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:39.216 00:08:39.216 real 0m16.065s 00:08:39.216 user 0m40.482s 00:08:39.216 sys 0m6.052s 00:08:39.216 22:32:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1118 -- # xtrace_disable 00:08:39.216 22:32:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.216 ************************************ 00:08:39.216 END TEST nvmf_connect_stress 00:08:39.216 ************************************ 00:08:39.216 22:32:22 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:08:39.216 22:32:22 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:39.216 22:32:22 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:08:39.216 22:32:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:08:39.216 22:32:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:39.216 ************************************ 00:08:39.216 START TEST nvmf_fused_ordering 00:08:39.216 ************************************ 00:08:39.216 22:32:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:39.474 * Looking for test storage... 
00:08:39.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:39.474 22:32:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:39.474 22:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:08:39.474 22:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.474 22:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:39.475 22:32:22 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:08:39.475 22:32:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:41.380 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:41.380 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:08:41.380 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:41.380 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:41.380 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:41.380 22:32:24 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:41.380 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:41.380 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:08:41.380 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:41.380 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:08:41.380 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:08:41.380 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:08:41.380 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:08:41.380 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:08:41.380 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:08:41.380 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:41.380 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:41.380 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:41.381 
22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:41.381 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:41.381 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:41.381 
22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:41.381 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.381 22:32:24 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:41.381 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:41.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:41.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:08:41.381 00:08:41.381 --- 10.0.0.2 ping statistics --- 00:08:41.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.381 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:41.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:41.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:08:41.381 00:08:41.381 --- 10.0.0.1 ping statistics --- 00:08:41.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.381 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:41.381 22:32:24 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1189606 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1189606 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@823 -- # '[' -z 1189606 ']' 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@828 -- # local max_retries=100 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # xtrace_disable 00:08:41.381 22:32:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:41.641 [2024-07-15 22:32:24.893925] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:08:41.642 [2024-07-15 22:32:24.894005] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.642 [2024-07-15 22:32:24.958450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.642 [2024-07-15 22:32:25.074289] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
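The launch-and-wait step above (nvmf_tgt started via `ip netns exec`, then `waitforlisten` polling /var/tmp/spdk.sock with max_retries=100) can be sketched as below. `wait_for_rpc_sock` is a simplified stand-in for autotest_common.sh's real `waitforlisten`, and the demo creates its own throwaway socket since an actual SPDK build and root privileges are not assumed.

```shell
#!/usr/bin/env bash
# Simplified stand-in for waitforlisten: poll until the RPC UNIX
# socket exists, giving up after max_retries attempts.
set -u

wait_for_rpc_sock() {
    local sock=$1 max_retries=${2:-100} i=0
    until [ -S "$sock" ]; do        # -S: path exists and is a socket
        i=$((i + 1))
        [ "$i" -ge "$max_retries" ] && return 1
        sleep 0.1
    done
    return 0
}

# Real invocation from the trace (needs root and an SPDK build):
#   ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
#   wait_for_rpc_sock /var/tmp/spdk.sock

# Demo against a socket we create ourselves:
rm -f /tmp/demo_spdk.sock
python3 -c 'import socket; socket.socket(socket.AF_UNIX).bind("/tmp/demo_spdk.sock")'
if wait_for_rpc_sock /tmp/demo_spdk.sock 10; then status=up; else status=down; fi
echo "$status"
rm -f /tmp/demo_spdk.sock
```

Only once the socket answers does the harness start issuing `rpc_cmd` calls, which is why the trace shows the "Waiting for process to start up and listen..." message before any transport or subsystem creation.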
00:08:41.642 [2024-07-15 22:32:25.074356] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.642 [2024-07-15 22:32:25.074371] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.642 [2024-07-15 22:32:25.074384] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.642 [2024-07-15 22:32:25.074395] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:41.642 [2024-07-15 22:32:25.074433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.580 22:32:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:08:42.580 22:32:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # return 0 00:08:42.580 22:32:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:42.580 22:32:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:42.580 22:32:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:42.580 22:32:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.580 22:32:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:42.580 22:32:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:42.580 22:32:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:42.580 [2024-07-15 22:32:25.881210] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:42.580 22:32:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:42.580 22:32:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:42.580 22:32:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:42.580 22:32:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:42.580 22:32:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:42.580 22:32:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:42.580 22:32:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:42.580 22:32:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:42.580 [2024-07-15 22:32:25.897351] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:42.580 22:32:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:42.580 22:32:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:42.580 22:32:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:42.580 22:32:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:42.580 NULL1 00:08:42.580 22:32:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:42.580 22:32:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:08:42.580 22:32:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:42.580 22:32:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:42.580 22:32:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:42.580 22:32:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
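The `rpc_cmd` calls traced above provision the target end to end: create the TCP transport, create the subsystem, add a listener on 10.0.0.2:4420, back it with a null bdev, and attach that bdev as namespace 1. Condensed into one list (the scripts/rpc.py path and socket are assumptions; `rpc_cmd` in the harness wraps the same tool), the sequence is:

```shell
#!/usr/bin/env bash
# Condensed provisioning sequence from the fused_ordering test,
# expressed as plain rpc.py invocations. Printed rather than run,
# since a live nvmf_tgt is needed to execute them.
set -u

RPC="scripts/rpc.py -s /var/tmp/spdk.sock"   # assumed rpc.py location
NQN=nqn.2016-06.io.spdk:cnode1

provision_cmds() {
    cat <<EOF
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns $NQN NULL1
EOF
}

cmds=$(provision_cmds)
echo "$cmds"
```

The 1000 MiB / 512-byte-block null bdev matches the "Namespace ID: 1 size: 1GB" line the fused_ordering initiator reports after connecting.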
00:08:42.580 22:32:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:42.580 22:32:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:42.580 22:32:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:42.580 22:32:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:42.580 [2024-07-15 22:32:25.940883] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:08:42.580 [2024-07-15 22:32:25.940918] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1189760 ] 00:08:43.151 Attached to nqn.2016-06.io.spdk:cnode1 00:08:43.151 Namespace ID: 1 size: 1GB 00:08:43.151 fused_ordering(0) 00:08:43.151 fused_ordering(1) 00:08:43.151 fused_ordering(2) 00:08:43.151 fused_ordering(3) 00:08:43.151 fused_ordering(4) 00:08:43.151 fused_ordering(5) 00:08:43.151 fused_ordering(6) 00:08:43.151 fused_ordering(7) 00:08:43.151 fused_ordering(8) 00:08:43.151 fused_ordering(9) 00:08:43.151 fused_ordering(10) 00:08:43.151 fused_ordering(11) 00:08:43.151 fused_ordering(12) 00:08:43.151 fused_ordering(13) 00:08:43.151 fused_ordering(14) 00:08:43.151 fused_ordering(15) 00:08:43.151 fused_ordering(16) 00:08:43.151 fused_ordering(17) 00:08:43.151 fused_ordering(18) 00:08:43.151 fused_ordering(19) 00:08:43.151 fused_ordering(20) 00:08:43.151 fused_ordering(21) 00:08:43.151 fused_ordering(22) 00:08:43.151 fused_ordering(23) 00:08:43.151 fused_ordering(24) 00:08:43.151 fused_ordering(25) 00:08:43.151 fused_ordering(26) 00:08:43.151 fused_ordering(27) 00:08:43.151 
fused_ordering(28) 00:08:43.151 … fused_ordering(876) [entries 28–876 elided: the fused_ordering counter increments by one per entry with no gaps, with timestamps stepping 00:08:43.151 → 00:08:43.721 → 00:08:43.722 → 00:08:44.660 → 00:08:44.661 → 00:08:45.229 → 00:08:45.230 → 00:08:46.165 across the run]
00:08:46.165 fused_ordering(877) 00:08:46.165 fused_ordering(878) 00:08:46.165 fused_ordering(879) 00:08:46.165 fused_ordering(880) 00:08:46.165 fused_ordering(881) 00:08:46.165 fused_ordering(882) 00:08:46.165 fused_ordering(883) 00:08:46.165 fused_ordering(884) 00:08:46.165 fused_ordering(885) 00:08:46.165 fused_ordering(886) 00:08:46.165 fused_ordering(887) 00:08:46.165 fused_ordering(888) 00:08:46.165 fused_ordering(889) 00:08:46.165 fused_ordering(890) 00:08:46.165 fused_ordering(891) 00:08:46.165 fused_ordering(892) 00:08:46.165 fused_ordering(893) 00:08:46.165 fused_ordering(894) 00:08:46.165 fused_ordering(895) 00:08:46.165 fused_ordering(896) 00:08:46.165 fused_ordering(897) 00:08:46.165 fused_ordering(898) 00:08:46.165 fused_ordering(899) 00:08:46.165 fused_ordering(900) 00:08:46.165 fused_ordering(901) 00:08:46.165 fused_ordering(902) 00:08:46.165 fused_ordering(903) 00:08:46.165 fused_ordering(904) 00:08:46.165 fused_ordering(905) 00:08:46.165 fused_ordering(906) 00:08:46.165 fused_ordering(907) 00:08:46.165 fused_ordering(908) 00:08:46.165 fused_ordering(909) 00:08:46.165 fused_ordering(910) 00:08:46.165 fused_ordering(911) 00:08:46.165 fused_ordering(912) 00:08:46.165 fused_ordering(913) 00:08:46.165 fused_ordering(914) 00:08:46.165 fused_ordering(915) 00:08:46.165 fused_ordering(916) 00:08:46.165 fused_ordering(917) 00:08:46.165 fused_ordering(918) 00:08:46.165 fused_ordering(919) 00:08:46.165 fused_ordering(920) 00:08:46.165 fused_ordering(921) 00:08:46.166 fused_ordering(922) 00:08:46.166 fused_ordering(923) 00:08:46.166 fused_ordering(924) 00:08:46.166 fused_ordering(925) 00:08:46.166 fused_ordering(926) 00:08:46.166 fused_ordering(927) 00:08:46.166 fused_ordering(928) 00:08:46.166 fused_ordering(929) 00:08:46.166 fused_ordering(930) 00:08:46.166 fused_ordering(931) 00:08:46.166 fused_ordering(932) 00:08:46.166 fused_ordering(933) 00:08:46.166 fused_ordering(934) 00:08:46.166 fused_ordering(935) 00:08:46.166 fused_ordering(936) 00:08:46.166 
fused_ordering(937) 00:08:46.166 fused_ordering(938) 00:08:46.166 fused_ordering(939) 00:08:46.166 fused_ordering(940) 00:08:46.166 fused_ordering(941) 00:08:46.166 fused_ordering(942) 00:08:46.166 fused_ordering(943) 00:08:46.166 fused_ordering(944) 00:08:46.166 fused_ordering(945) 00:08:46.166 fused_ordering(946) 00:08:46.166 fused_ordering(947) 00:08:46.166 fused_ordering(948) 00:08:46.166 fused_ordering(949) 00:08:46.166 fused_ordering(950) 00:08:46.166 fused_ordering(951) 00:08:46.166 fused_ordering(952) 00:08:46.166 fused_ordering(953) 00:08:46.166 fused_ordering(954) 00:08:46.166 fused_ordering(955) 00:08:46.166 fused_ordering(956) 00:08:46.166 fused_ordering(957) 00:08:46.166 fused_ordering(958) 00:08:46.166 fused_ordering(959) 00:08:46.166 fused_ordering(960) 00:08:46.166 fused_ordering(961) 00:08:46.166 fused_ordering(962) 00:08:46.166 fused_ordering(963) 00:08:46.166 fused_ordering(964) 00:08:46.166 fused_ordering(965) 00:08:46.166 fused_ordering(966) 00:08:46.166 fused_ordering(967) 00:08:46.166 fused_ordering(968) 00:08:46.166 fused_ordering(969) 00:08:46.166 fused_ordering(970) 00:08:46.166 fused_ordering(971) 00:08:46.166 fused_ordering(972) 00:08:46.166 fused_ordering(973) 00:08:46.166 fused_ordering(974) 00:08:46.166 fused_ordering(975) 00:08:46.166 fused_ordering(976) 00:08:46.166 fused_ordering(977) 00:08:46.166 fused_ordering(978) 00:08:46.166 fused_ordering(979) 00:08:46.166 fused_ordering(980) 00:08:46.166 fused_ordering(981) 00:08:46.166 fused_ordering(982) 00:08:46.166 fused_ordering(983) 00:08:46.166 fused_ordering(984) 00:08:46.166 fused_ordering(985) 00:08:46.166 fused_ordering(986) 00:08:46.166 fused_ordering(987) 00:08:46.166 fused_ordering(988) 00:08:46.166 fused_ordering(989) 00:08:46.166 fused_ordering(990) 00:08:46.166 fused_ordering(991) 00:08:46.166 fused_ordering(992) 00:08:46.166 fused_ordering(993) 00:08:46.166 fused_ordering(994) 00:08:46.166 fused_ordering(995) 00:08:46.166 fused_ordering(996) 00:08:46.166 fused_ordering(997) 
00:08:46.166 fused_ordering(998) 00:08:46.166 fused_ordering(999) 00:08:46.166 fused_ordering(1000) 00:08:46.166 fused_ordering(1001) 00:08:46.166 fused_ordering(1002) 00:08:46.166 fused_ordering(1003) 00:08:46.166 fused_ordering(1004) 00:08:46.166 fused_ordering(1005) 00:08:46.166 fused_ordering(1006) 00:08:46.166 fused_ordering(1007) 00:08:46.166 fused_ordering(1008) 00:08:46.166 fused_ordering(1009) 00:08:46.166 fused_ordering(1010) 00:08:46.166 fused_ordering(1011) 00:08:46.166 fused_ordering(1012) 00:08:46.166 fused_ordering(1013) 00:08:46.166 fused_ordering(1014) 00:08:46.166 fused_ordering(1015) 00:08:46.166 fused_ordering(1016) 00:08:46.166 fused_ordering(1017) 00:08:46.166 fused_ordering(1018) 00:08:46.166 fused_ordering(1019) 00:08:46.166 fused_ordering(1020) 00:08:46.166 fused_ordering(1021) 00:08:46.166 fused_ordering(1022) 00:08:46.166 fused_ordering(1023) 00:08:46.166 22:32:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:08:46.166 22:32:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:08:46.166 22:32:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:46.166 22:32:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:08:46.166 22:32:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:46.166 22:32:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:08:46.166 22:32:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:46.166 22:32:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:46.166 rmmod nvme_tcp 00:08:46.166 rmmod nvme_fabrics 00:08:46.166 rmmod nvme_keyring 00:08:46.166 22:32:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:46.166 22:32:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:08:46.166 22:32:29 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@125 -- # return 0 00:08:46.166 22:32:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1189606 ']' 00:08:46.166 22:32:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1189606 00:08:46.166 22:32:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@942 -- # '[' -z 1189606 ']' 00:08:46.166 22:32:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # kill -0 1189606 00:08:46.166 22:32:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@947 -- # uname 00:08:46.166 22:32:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:08:46.166 22:32:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1189606 00:08:46.166 22:32:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:08:46.166 22:32:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:08:46.166 22:32:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1189606' 00:08:46.166 killing process with pid 1189606 00:08:46.166 22:32:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@961 -- # kill 1189606 00:08:46.166 22:32:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # wait 1189606 00:08:46.425 22:32:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:46.425 22:32:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:46.425 22:32:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:46.425 22:32:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:46.425 22:32:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:46.425 22:32:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:08:46.425 22:32:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:46.425 22:32:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.955 22:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:48.955 00:08:48.955 real 0m9.250s 00:08:48.955 user 0m7.088s 00:08:48.955 sys 0m4.234s 00:08:48.955 22:32:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1118 -- # xtrace_disable 00:08:48.955 22:32:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:48.955 ************************************ 00:08:48.955 END TEST nvmf_fused_ordering 00:08:48.955 ************************************ 00:08:48.955 22:32:31 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:08:48.955 22:32:31 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:48.955 22:32:31 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:08:48.955 22:32:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:08:48.955 22:32:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:48.955 ************************************ 00:08:48.955 START TEST nvmf_delete_subsystem 00:08:48.955 ************************************ 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:48.955 * Looking for test storage... 
00:08:48.955 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:48.955 22:32:32 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:48.955 22:32:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:50.862 22:32:33 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:50.862 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:50.862 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:50.862 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:50.862 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:50.862 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:50.863 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:50.863 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:50.863 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:50.863 
22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:50.863 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:50.863 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:50.863 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:50.863 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:50.863 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:50.863 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:50.863 22:32:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:50.863 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:50.863 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:50.863 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:50.863 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:50.863 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:50.863 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:50.863 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:50.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:50.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:08:50.863 00:08:50.863 --- 10.0.0.2 ping statistics --- 00:08:50.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.863 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:08:50.863 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:50.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:50.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:08:50.863 00:08:50.863 --- 10.0.0.1 ping statistics --- 00:08:50.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.863 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:08:50.863 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:50.863 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:08:50.863 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:50.863 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:50.863 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:50.863 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:50.863 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:50.863 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:50.863 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:50.863 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:50.863 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:50.863 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:50.863 
22:32:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:50.863 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1192089 00:08:50.863 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:50.863 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1192089 00:08:50.863 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@823 -- # '[' -z 1192089 ']' 00:08:50.863 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.863 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@828 -- # local max_retries=100 00:08:50.863 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.863 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # xtrace_disable 00:08:50.863 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:50.863 [2024-07-15 22:32:34.189905] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:08:50.863 [2024-07-15 22:32:34.189988] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.863 [2024-07-15 22:32:34.256158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:51.122 [2024-07-15 22:32:34.372079] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:51.122 [2024-07-15 22:32:34.372133] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.122 [2024-07-15 22:32:34.372149] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.122 [2024-07-15 22:32:34.372163] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:51.122 [2024-07-15 22:32:34.372179] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:51.122 [2024-07-15 22:32:34.372259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.122 [2024-07-15 22:32:34.372267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.122 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:08:51.122 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # return 0 00:08:51.122 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:51.122 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:51.122 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:51.122 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.122 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:51.122 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:51.122 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:51.122 [2024-07-15 22:32:34.521736] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.122 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 
00:08:51.122 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:51.122 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:51.122 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:51.122 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:51.122 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:51.122 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:51.122 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:51.122 [2024-07-15 22:32:34.538044] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:51.122 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:51.122 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:51.122 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:51.122 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:51.122 NULL1 00:08:51.122 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:51.122 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:51.122 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:51.122 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:51.122 Delay0 00:08:51.122 22:32:34 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:51.122 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:51.122 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:51.122 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:51.122 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:51.122 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1192119 00:08:51.122 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:51.122 22:32:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:51.122 [2024-07-15 22:32:34.612774] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:08:53.658 22:32:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:53.658 22:32:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:53.658 22:32:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 starting I/O failed: -6 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 starting I/O failed: -6 00:08:53.658 Write completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Write completed with error (sct=0, sc=8) 00:08:53.658 Write completed with error (sct=0, sc=8) 00:08:53.658 starting I/O failed: -6 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Write completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 starting I/O failed: -6 00:08:53.658 Write completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 starting I/O failed: -6 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 starting I/O failed: -6 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Write completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 starting I/O failed: -6 00:08:53.658 Read completed with error (sct=0, sc=8) 
00:08:53.658 Write completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 starting I/O failed: -6 00:08:53.658 Write completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 starting I/O failed: -6 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Write completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 starting I/O failed: -6 00:08:53.658 Write completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Write completed with error (sct=0, sc=8) 00:08:53.658 starting I/O failed: -6 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 [2024-07-15 22:32:36.783705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f90b000cfe0 is same with the state(5) to be set 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 starting I/O failed: -6 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 starting I/O failed: -6 00:08:53.658 Write completed with error (sct=0, sc=8) 00:08:53.658 Write completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Write completed with error (sct=0, sc=8) 00:08:53.658 starting I/O failed: -6 00:08:53.658 Write 
completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 starting I/O failed: -6 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Write completed with error (sct=0, sc=8) 00:08:53.658 Write completed with error (sct=0, sc=8) 00:08:53.658 starting I/O failed: -6 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 starting I/O failed: -6 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 starting I/O failed: -6 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.658 Read completed with error (sct=0, sc=8) 00:08:53.659 Write completed with error (sct=0, sc=8) 00:08:53.659 starting I/O failed: -6 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Write completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Write completed with error (sct=0, sc=8) 00:08:53.659 starting I/O failed: -6 00:08:53.659 [2024-07-15 22:32:36.785648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d955c0 is same with the state(5) to be set 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Write completed 
with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Write completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Write completed with error (sct=0, sc=8) 00:08:53.659 Write completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Write completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Write completed with error (sct=0, sc=8) 00:08:53.659 Write completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Write completed with error (sct=0, sc=8) 00:08:53.659 Write completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Write completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Write completed with error (sct=0, sc=8) 00:08:53.659 Write completed with error (sct=0, sc=8) 
00:08:53.659 Write completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Write completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Write completed with error (sct=0, sc=8) 00:08:53.659 Write completed with error (sct=0, sc=8) 00:08:53.659 Write completed with error (sct=0, sc=8) 00:08:53.659 Write completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 [2024-07-15 22:32:36.786005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f90b0000c00 is same with the state(5) to be set 00:08:53.659 Write completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Write completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Write completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Write completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Write completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Write completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Write completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Write completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 
00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Write completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Write completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 Write completed with error (sct=0, sc=8) 00:08:53.659 Read completed with error (sct=0, sc=8) 00:08:53.659 [2024-07-15 22:32:36.786292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d95980 is same with the state(5) to be set 00:08:54.594 [2024-07-15 22:32:37.752930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d96ac0 is same with the state(5) to be set 00:08:54.594 Read completed with error (sct=0, sc=8) 00:08:54.594 Write completed with error (sct=0, sc=8) 00:08:54.594 Write completed with error (sct=0, sc=8) 00:08:54.594 Read completed with error (sct=0, sc=8) 00:08:54.594 Read completed with error (sct=0, sc=8) 00:08:54.594 Read completed with error (sct=0, sc=8) 00:08:54.594 Read completed with error (sct=0, sc=8) 00:08:54.594 Write completed with error (sct=0, sc=8) 00:08:54.594 Write completed with error (sct=0, sc=8) 00:08:54.594 Read completed with error (sct=0, sc=8) 00:08:54.594 Read completed with error (sct=0, sc=8) 
00:08:54.594 Read completed with error (sct=0, sc=8) 00:08:54.594 Read completed with error (sct=0, sc=8) 00:08:54.594 Write completed with error (sct=0, sc=8) 00:08:54.594 Read completed with error (sct=0, sc=8) 00:08:54.594 Read completed with error (sct=0, sc=8) 00:08:54.594 Write completed with error (sct=0, sc=8) 00:08:54.594 Read completed with error (sct=0, sc=8) 00:08:54.594 Write completed with error (sct=0, sc=8) 00:08:54.594 Write completed with error (sct=0, sc=8) 00:08:54.594 Read completed with error (sct=0, sc=8) 00:08:54.594 Write completed with error (sct=0, sc=8) 00:08:54.594 [2024-07-15 22:32:37.787567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f90b000d2f0 is same with the state(5) to be set 00:08:54.594 Read completed with error (sct=0, sc=8) 00:08:54.594 Read completed with error (sct=0, sc=8) 00:08:54.594 Write completed with error (sct=0, sc=8) 00:08:54.594 Read completed with error (sct=0, sc=8) 00:08:54.594 Write completed with error (sct=0, sc=8) 00:08:54.594 Read completed with error (sct=0, sc=8) 00:08:54.594 Read completed with error (sct=0, sc=8) 00:08:54.594 Read completed with error (sct=0, sc=8) 00:08:54.594 Read completed with error (sct=0, sc=8) 00:08:54.594 Write completed with error (sct=0, sc=8) 00:08:54.594 Write completed with error (sct=0, sc=8) 00:08:54.594 Read completed with error (sct=0, sc=8) 00:08:54.594 Read completed with error (sct=0, sc=8) 00:08:54.594 Read completed with error (sct=0, sc=8) 00:08:54.594 Read completed with error (sct=0, sc=8) 00:08:54.594 Write completed with error (sct=0, sc=8) 00:08:54.594 Write completed with error (sct=0, sc=8) 00:08:54.594 Write completed with error (sct=0, sc=8) 00:08:54.594 Read completed with error (sct=0, sc=8) 00:08:54.594 Write completed with error (sct=0, sc=8) 00:08:54.594 Write completed with error (sct=0, sc=8) 00:08:54.594 Read completed with error (sct=0, sc=8) 00:08:54.594 [2024-07-15 22:32:37.787909] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f90b000d600 is same with the state(5) to be set 00:08:54.594 Read completed with error (sct=0, sc=8) 00:08:54.594 Read completed with error (sct=0, sc=8) 00:08:54.594 Read completed with error (sct=0, sc=8) 00:08:54.594 Read completed with error (sct=0, sc=8) 00:08:54.595 Read completed with error (sct=0, sc=8) 00:08:54.595 Read completed with error (sct=0, sc=8) 00:08:54.595 Read completed with error (sct=0, sc=8) 00:08:54.595 Write completed with error (sct=0, sc=8) 00:08:54.595 Read completed with error (sct=0, sc=8) 00:08:54.595 Write completed with error (sct=0, sc=8) 00:08:54.595 Read completed with error (sct=0, sc=8) 00:08:54.595 Read completed with error (sct=0, sc=8) 00:08:54.595 Read completed with error (sct=0, sc=8) 00:08:54.595 [2024-07-15 22:32:37.789673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d953e0 is same with the state(5) to be set 00:08:54.595 Read completed with error (sct=0, sc=8) 00:08:54.595 Read completed with error (sct=0, sc=8) 00:08:54.595 Write completed with error (sct=0, sc=8) 00:08:54.595 Read completed with error (sct=0, sc=8) 00:08:54.595 Write completed with error (sct=0, sc=8) 00:08:54.595 Read completed with error (sct=0, sc=8) 00:08:54.595 Write completed with error (sct=0, sc=8) 00:08:54.595 Read completed with error (sct=0, sc=8) 00:08:54.595 Write completed with error (sct=0, sc=8) 00:08:54.595 Read completed with error (sct=0, sc=8) 00:08:54.595 Read completed with error (sct=0, sc=8) 00:08:54.595 Write completed with error (sct=0, sc=8) 00:08:54.595 [2024-07-15 22:32:37.789866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d957a0 is same with the state(5) to be set 00:08:54.595 Initializing NVMe Controllers 00:08:54.595 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:54.595 Controller IO queue size 128, less than required. 
00:08:54.595 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:54.595 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:54.595 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:54.595 Initialization complete. Launching workers. 00:08:54.595 ======================================================== 00:08:54.595 Latency(us) 00:08:54.595 Device Information : IOPS MiB/s Average min max 00:08:54.595 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 147.32 0.07 955630.10 666.00 1047183.20 00:08:54.595 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 165.18 0.08 906915.72 1375.70 1014002.44 00:08:54.595 ======================================================== 00:08:54.595 Total : 312.50 0.15 929881.07 666.00 1047183.20 00:08:54.595 00:08:54.595 [2024-07-15 22:32:37.790771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d96ac0 (9): Bad file descriptor 00:08:54.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:54.595 22:32:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:54.595 22:32:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:54.595 22:32:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1192119 00:08:54.595 22:32:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:54.852 22:32:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:54.852 22:32:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1192119 00:08:54.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1192119) - No such process 00:08:54.852 22:32:38 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1192119 00:08:54.852 22:32:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # local es=0 00:08:54.852 22:32:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # valid_exec_arg wait 1192119 00:08:54.852 22:32:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@630 -- # local arg=wait 00:08:54.852 22:32:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:08:54.852 22:32:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@634 -- # type -t wait 00:08:54.852 22:32:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:08:54.852 22:32:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@645 -- # wait 1192119 00:08:54.852 22:32:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@645 -- # es=1 00:08:54.852 22:32:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:08:54.852 22:32:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:08:54.852 22:32:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:08:54.852 22:32:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:54.852 22:32:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:54.852 22:32:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:54.852 22:32:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:54.852 22:32:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:54.852 22:32:38 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:08:54.852 22:32:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:54.852 [2024-07-15 22:32:38.312454] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.852 22:32:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:54.852 22:32:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:54.852 22:32:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:54.852 22:32:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:54.852 22:32:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:54.852 22:32:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1192636 00:08:54.852 22:32:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:54.852 22:32:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1192636 00:08:54.852 22:32:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:54.852 22:32:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:55.109 [2024-07-15 22:32:38.374128] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:08:55.366 22:32:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:55.366 22:32:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1192636 00:08:55.366 22:32:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:55.934 22:32:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:55.934 22:32:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1192636 00:08:55.934 22:32:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:56.499 22:32:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:56.499 22:32:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1192636 00:08:56.499 22:32:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:57.099 22:32:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:57.099 22:32:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1192636 00:08:57.099 22:32:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:57.358 22:32:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:57.358 22:32:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1192636 00:08:57.358 22:32:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:57.926 22:32:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:57.926 22:32:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1192636 00:08:57.926 22:32:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:58.185 Initializing NVMe Controllers 00:08:58.185 Attached to NVMe over Fabrics 
controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:58.185 Controller IO queue size 128, less than required. 00:08:58.185 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:58.185 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:58.185 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:58.185 Initialization complete. Launching workers. 00:08:58.185 ======================================================== 00:08:58.185 Latency(us) 00:08:58.185 Device Information : IOPS MiB/s Average min max 00:08:58.185 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003854.23 1000284.71 1042622.38 00:08:58.185 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005491.06 1000214.48 1041600.84 00:08:58.185 ======================================================== 00:08:58.185 Total : 256.00 0.12 1004672.65 1000214.48 1042622.38 00:08:58.185 00:08:58.443 22:32:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:58.443 22:32:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1192636 00:08:58.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1192636) - No such process 00:08:58.443 22:32:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1192636 00:08:58.443 22:32:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:58.443 22:32:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:58.443 22:32:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:58.443 22:32:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:08:58.443 22:32:41 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:58.443 22:32:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:08:58.443 22:32:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:58.443 22:32:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:58.443 rmmod nvme_tcp 00:08:58.443 rmmod nvme_fabrics 00:08:58.443 rmmod nvme_keyring 00:08:58.443 22:32:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:58.443 22:32:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:08:58.443 22:32:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:08:58.443 22:32:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1192089 ']' 00:08:58.443 22:32:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1192089 00:08:58.443 22:32:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@942 -- # '[' -z 1192089 ']' 00:08:58.443 22:32:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # kill -0 1192089 00:08:58.443 22:32:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@947 -- # uname 00:08:58.443 22:32:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:08:58.443 22:32:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1192089 00:08:58.443 22:32:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:08:58.443 22:32:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:08:58.443 22:32:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1192089' 00:08:58.443 killing process with pid 1192089 00:08:58.443 22:32:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@961 -- # kill 1192089 00:08:58.443 22:32:41 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # wait 1192089 00:08:59.010 22:32:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:59.010 22:32:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:59.010 22:32:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:59.010 22:32:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:59.010 22:32:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:59.010 22:32:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.010 22:32:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:59.010 22:32:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.911 22:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:00.911 00:09:00.911 real 0m12.263s 00:09:00.911 user 0m27.864s 00:09:00.911 sys 0m3.027s 00:09:00.911 22:32:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1118 -- # xtrace_disable 00:09:00.911 22:32:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:00.911 ************************************ 00:09:00.911 END TEST nvmf_delete_subsystem 00:09:00.911 ************************************ 00:09:00.911 22:32:44 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:09:00.911 22:32:44 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:09:00.911 22:32:44 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:09:00.911 22:32:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:09:00.911 22:32:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:00.911 ************************************ 
00:09:00.911 START TEST nvmf_ns_masking 00:09:00.911 ************************************ 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1117 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:09:00.912 * Looking for test storage... 00:09:00.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:00.912 22:32:44 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # 
'[' 0 -eq 1 ']' 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=4d69afbf-7a63-4761-a8aa-f848918ced73 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=93e60809-48ad-4ec7-8f4e-a36ecd4179cb 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=95efb329-a056-45fe-8e3f-f03a9ae357c0 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 
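The bounded wait traced at the top of this chunk (delete_subsystem.sh @57-@60, polling PID 1192636 every 0.5s) is a standard bash pattern: `kill -0` probes whether a process still exists without signalling it. A minimal standalone sketch — `wait_for_exit` is our own helper name, the real script inlines the loop:

```shell
# Poll a PID with `kill -0` until it exits, giving up after 21 polls (~10.5s),
# the same budget as the (( delay++ > 20 )) check in the trace.
wait_for_exit() {
    local pid=$1 delay=0
    while kill -0 "$pid" 2>/dev/null; do
        (( delay++ > 20 )) && return 1   # budget exhausted, process still alive
        sleep 0.5
    done
    return 0                              # process is gone
}

sleep 1 &        # stand-in for the perf process the test waits on
wait_for_exit $! && result=gone || result=stuck
echo "$result"
```

The `2>/dev/null` matters: once the PID is reaped, `kill -0` both fails and complains, exactly as the "No such process" line later in this log shows.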
00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:09:00.912 22:32:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:09:03.440 
22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:03.440 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:03.440 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:03.440 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:03.440 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:03.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:03.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:09:03.440 00:09:03.440 --- 10.0.0.2 ping statistics --- 00:09:03.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.440 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:03.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:03.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:09:03.440 00:09:03.440 --- 10.0.0.1 ping statistics --- 00:09:03.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.440 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:03.440 22:32:46 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1194988 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1194988 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@823 -- # '[' -z 1194988 ']' 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@828 -- # local max_retries=100 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # xtrace_disable 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:03.440 [2024-07-15 22:32:46.637178] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
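The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." step (waitforlisten, with the `max_retries=100` visible in the trace) boils down to polling until the RPC socket appears. A hedged, runnable sketch — a fifo under a `mktemp` path stands in for the socket so no SPDK target is needed:

```shell
# Simulate waitforlisten: one process creates the "socket" after a delay,
# the waiter polls for it with a capped retry budget.
sock=$(mktemp -u)
( sleep 0.3; mkfifo "$sock" ) &    # pretend the target just created its RPC socket
creator=$!

status=timeout
for _ in $(seq 1 100); do          # same 100-retry cap as autotest_common.sh
    if [ -e "$sock" ]; then status=listening; break; fi
    sleep 0.1
done
wait "$creator"
echo "$status"
rm -f "$sock"
```

The real helper additionally checks that the PID is still alive between polls (so a crashed target fails fast instead of burning the whole retry budget).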
00:09:03.440 [2024-07-15 22:32:46.637276] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.440 [2024-07-15 22:32:46.699670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.440 [2024-07-15 22:32:46.806637] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:03.440 [2024-07-15 22:32:46.806695] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:03.440 [2024-07-15 22:32:46.806720] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:03.440 [2024-07-15 22:32:46.806733] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:03.440 [2024-07-15 22:32:46.806745] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
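Stripped of the xtrace noise, the setup the ns_masking test performs over the next several entries is a short sequence of rpc.py calls. Reconstructed as a plain command list for readability (paths, NQNs, and the 10.0.0.2 listener address are the ones from this run; this fragment only works against the live target on the CI box):

```shell
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc bdev_malloc_create 64 512 -b Malloc2
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```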
00:09:03.440 [2024-07-15 22:32:46.806781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # return 0 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:03.440 22:32:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:03.698 22:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.698 22:32:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:03.954 [2024-07-15 22:32:47.232693] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.954 22:32:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:09:03.954 22:32:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:09:03.954 22:32:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:04.212 Malloc1 00:09:04.212 22:32:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:04.469 Malloc2 00:09:04.469 22:32:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:04.726 22:32:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:04.984 22:32:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:05.243 [2024-07-15 22:32:48.545598] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.243 22:32:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:09:05.243 22:32:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 95efb329-a056-45fe-8e3f-f03a9ae357c0 -a 10.0.0.2 -s 4420 -i 4 00:09:05.243 22:32:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:09:05.243 22:32:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1192 -- # local i=0 00:09:05.243 22:32:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:09:05.243 22:32:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:09:05.243 22:32:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # sleep 2 00:09:07.806 22:32:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:09:07.806 22:32:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:09:07.806 22:32:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:09:07.806 22:32:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:09:07.806 22:32:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:09:07.806 22:32:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # return 0 00:09:07.806 
22:32:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:07.806 22:32:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:07.806 22:32:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:07.806 22:32:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:07.806 22:32:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:09:07.806 22:32:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:07.806 22:32:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:07.806 [ 0]:0x1 00:09:07.806 22:32:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:07.806 22:32:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:07.806 22:32:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=477da2d237bc4019b6acd6100ef23068 00:09:07.806 22:32:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 477da2d237bc4019b6acd6100ef23068 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:07.806 22:32:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:09:07.806 22:32:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:09:07.806 22:32:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:07.806 22:32:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:07.806 [ 0]:0x1 00:09:07.806 22:32:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:07.806 22:32:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 
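ns_is_visible (ns_masking.sh @43-@45) decides visibility by comparing the NGUID that `nvme id-ns ... -o json | jq -r .nguid` returns against all zeroes: a masked namespace reports a zero NGUID, a visible one reports its real NGUID. The comparison itself is plain bash; the sketch below exercises it with the two values from this trace so no nvme CLI is required (`is_visible` is our name for the inlined check):

```shell
# A namespace hidden from this host identifies with an all-zero NGUID.
zero=00000000000000000000000000000000
is_visible() { [ "$1" != "$zero" ]; }

is_visible 477da2d237bc4019b6acd6100ef23068 && v1=visible || v1=masked
is_visible "$zero"                           && v2=visible || v2=masked
echo "$v1 $v2"    # visible masked
```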
00:09:07.806 22:32:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=477da2d237bc4019b6acd6100ef23068 00:09:07.806 22:32:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 477da2d237bc4019b6acd6100ef23068 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:07.806 22:32:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:09:07.806 22:32:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:07.806 22:32:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:07.806 [ 1]:0x2 00:09:07.806 22:32:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:07.806 22:32:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:07.806 22:32:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=619dc33148734be7aefdeae136d987f9 00:09:07.806 22:32:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 619dc33148734be7aefdeae136d987f9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:07.806 22:32:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:09:07.806 22:32:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:07.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.806 22:32:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.065 22:32:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:08.322 22:32:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:09:08.322 22:32:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 95efb329-a056-45fe-8e3f-f03a9ae357c0 -a 10.0.0.2 -s 4420 -i 4 00:09:08.581 22:32:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:08.581 22:32:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1192 -- # local i=0 00:09:08.581 22:32:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:09:08.581 22:32:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # [[ -n 1 ]] 00:09:08.581 22:32:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # nvme_device_counter=1 00:09:08.581 22:32:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # sleep 2 00:09:10.488 22:32:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:09:10.488 22:32:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:09:10.488 22:32:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:09:10.488 22:32:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:09:10.488 22:32:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:09:10.488 22:32:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # return 0 00:09:10.488 22:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:10.488 22:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:10.746 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:10.746 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:10.746 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 
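Expected failures in this trace are asserted with the `NOT` wrapper from `common/autotest_common.sh`, which inverts a command's exit status; the `es` bookkeeping visible in the trace (`es=1`, `(( es > 128 ))`) additionally distinguishes ordinary failures from crash exit codes. A simplified sketch of just the inversion behavior (omitting the crash-detection bookkeeping of the real helper):

```shell
#!/bin/sh
# Simplified NOT: succeed exactly when the wrapped command fails.
# The real helper in autotest_common.sh also records the exit status and
# treats codes above 128 (signal deaths) as genuine errors, not "expected
# failure" -- that extra logic is elided here.
NOT() {
    if "$@"; then
        return 1
    fi
    return 0
}

NOT false && echo "wrapped failure treated as success"
```

So `NOT ns_is_visible 0x1` in the trace passes precisely when namespace 1 has been masked away from the host.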
00:09:10.746 22:32:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # local es=0 00:09:10.746 22:32:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@644 -- # valid_exec_arg ns_is_visible 0x1 00:09:10.746 22:32:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@630 -- # local arg=ns_is_visible 00:09:10.746 22:32:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:09:10.746 22:32:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # type -t ns_is_visible 00:09:10.746 22:32:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:09:10.746 22:32:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # ns_is_visible 0x1 00:09:10.746 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:10.746 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:10.746 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:10.746 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:10.746 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:10.746 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:10.746 22:32:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # es=1 00:09:10.746 22:32:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:09:10.746 22:32:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:09:10.746 22:32:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:09:10.746 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:09:10.746 22:32:54 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:10.746 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:10.746 [ 0]:0x2 00:09:10.746 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:10.746 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:11.004 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=619dc33148734be7aefdeae136d987f9 00:09:11.004 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 619dc33148734be7aefdeae136d987f9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:11.004 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:11.263 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:09:11.263 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:11.263 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:11.263 [ 0]:0x1 00:09:11.263 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:11.263 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:11.263 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=477da2d237bc4019b6acd6100ef23068 00:09:11.263 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 477da2d237bc4019b6acd6100ef23068 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:11.264 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:09:11.264 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:11.264 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 
0x2 00:09:11.264 [ 1]:0x2 00:09:11.264 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:11.264 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:11.264 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=619dc33148734be7aefdeae136d987f9 00:09:11.264 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 619dc33148734be7aefdeae136d987f9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:11.264 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:11.523 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:09:11.523 22:32:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # local es=0 00:09:11.523 22:32:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@644 -- # valid_exec_arg ns_is_visible 0x1 00:09:11.523 22:32:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@630 -- # local arg=ns_is_visible 00:09:11.523 22:32:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:09:11.523 22:32:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # type -t ns_is_visible 00:09:11.523 22:32:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:09:11.523 22:32:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # ns_is_visible 0x1 00:09:11.523 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:11.523 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:11.523 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:11.523 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # jq -r .nguid 00:09:11.523 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:11.523 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:11.523 22:32:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # es=1 00:09:11.523 22:32:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:09:11.523 22:32:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:09:11.523 22:32:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:09:11.523 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:09:11.523 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:11.523 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:11.523 [ 0]:0x2 00:09:11.523 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:11.523 22:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:11.523 22:32:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=619dc33148734be7aefdeae136d987f9 00:09:11.523 22:32:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 619dc33148734be7aefdeae136d987f9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:11.524 22:32:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:09:11.524 22:32:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:11.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.782 22:32:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host 
nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:12.040 22:32:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:09:12.040 22:32:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 95efb329-a056-45fe-8e3f-f03a9ae357c0 -a 10.0.0.2 -s 4420 -i 4 00:09:12.040 22:32:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:12.040 22:32:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1192 -- # local i=0 00:09:12.040 22:32:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:09:12.040 22:32:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # [[ -n 2 ]] 00:09:12.040 22:32:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # nvme_device_counter=2 00:09:12.040 22:32:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # sleep 2 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_devices=2 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # return 0 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:14.622 [ 0]:0x1 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=477da2d237bc4019b6acd6100ef23068 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 477da2d237bc4019b6acd6100ef23068 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:14.622 [ 1]:0x2 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=619dc33148734be7aefdeae136d987f9 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 619dc33148734be7aefdeae136d987f9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # local es=0 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@644 -- # valid_exec_arg ns_is_visible 0x1 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@630 -- # local arg=ns_is_visible 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # type -t ns_is_visible 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # ns_is_visible 0x1 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:14.622 22:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:14.622 22:32:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:14.622 22:32:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:14.622 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # es=1 00:09:14.622 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:09:14.622 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:09:14.622 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:09:14.622 22:32:58 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:09:14.622 22:32:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:14.622 22:32:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:14.622 [ 0]:0x2 00:09:14.623 22:32:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:14.623 22:32:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:14.623 22:32:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=619dc33148734be7aefdeae136d987f9 00:09:14.623 22:32:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 619dc33148734be7aefdeae136d987f9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:14.623 22:32:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:14.623 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # local es=0 00:09:14.623 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:14.623 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@630 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:14.623 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:09:14.623 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:14.623 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:09:14.623 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:14.623 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:09:14.623 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:14.623 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:14.623 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:14.881 [2024-07-15 22:32:58.287252] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:14.881 request: 00:09:14.881 { 00:09:14.881 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:14.881 "nsid": 2, 00:09:14.881 "host": "nqn.2016-06.io.spdk:host1", 00:09:14.881 "method": "nvmf_ns_remove_host", 00:09:14.881 "req_id": 1 00:09:14.881 } 00:09:14.881 Got JSON-RPC error response 00:09:14.881 response: 00:09:14.881 { 00:09:14.881 "code": -32602, 00:09:14.881 "message": "Invalid parameters" 00:09:14.881 } 00:09:14.881 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # es=1 00:09:14.881 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:09:14.881 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:09:14.881 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:09:14.881 22:32:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:09:14.881 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # local es=0 00:09:14.881 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@644 -- # valid_exec_arg ns_is_visible 0x1 
00:09:14.881 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@630 -- # local arg=ns_is_visible 00:09:14.881 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:09:14.881 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # type -t ns_is_visible 00:09:14.881 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:09:14.882 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # ns_is_visible 0x1 00:09:14.882 22:32:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:14.882 22:32:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:14.882 22:32:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:14.882 22:32:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:14.882 22:32:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:14.882 22:32:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:14.882 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # es=1 00:09:14.882 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:09:14.882 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:09:14.882 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:09:14.882 22:32:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:09:14.882 22:32:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:14.882 22:32:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:14.882 [ 0]:0x2 00:09:14.882 22:32:58 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:14.882 22:32:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:15.140 22:32:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=619dc33148734be7aefdeae136d987f9 00:09:15.140 22:32:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 619dc33148734be7aefdeae136d987f9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:15.140 22:32:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:09:15.140 22:32:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:15.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.140 22:32:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1196489 00:09:15.140 22:32:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:09:15.140 22:32:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.140 22:32:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1196489 /var/tmp/host.sock 00:09:15.140 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@823 -- # '[' -z 1196489 ']' 00:09:15.140 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/host.sock 00:09:15.140 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@828 -- # local max_retries=100 00:09:15.140 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:09:15.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:09:15.140 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # xtrace_disable 00:09:15.140 22:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:15.140 [2024-07-15 22:32:58.504269] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:09:15.140 [2024-07-15 22:32:58.504356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196489 ] 00:09:15.140 [2024-07-15 22:32:58.569473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.399 [2024-07-15 22:32:58.695139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.335 22:32:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:09:16.335 22:32:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # return 0 00:09:16.335 22:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.335 22:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:16.593 22:33:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 4d69afbf-7a63-4761-a8aa-f848918ced73 00:09:16.593 22:33:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:16.593 22:33:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 4D69AFBF7A634761A8AAF848918CED73 -i 00:09:16.851 22:33:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 93e60809-48ad-4ec7-8f4e-a36ecd4179cb 
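The `uuid2nguid` calls above feed `nvmf_subsystem_add_ns -g`: the trace shows the UUID `4d69afbf-7a63-4761-a8aa-f848918ced73` becoming the NGUID `4D69AFBF7A634761A8AAF848918CED73`, i.e. dashes stripped (the `tr -d -` visible in the trace) and hex digits uppercased into the 32-character form the RPC expects. A hypothetical re-creation of that conversion, written portably rather than copied from `nvmf/common.sh`:

```shell
#!/bin/sh
# Convert an RFC 4122 UUID string into the 32-hex-digit NGUID form used by
# the nvmf_subsystem_add_ns -g flag: drop the dashes, uppercase the digits.
# (Sketch of the observed behavior; the exact helper lives in nvmf/common.sh.)
uuid2nguid() {
    echo "$1" | tr -d - | tr '[:lower:]' '[:upper:]'
}

uuid2nguid 4d69afbf-7a63-4761-a8aa-f848918ced73
```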
00:09:16.851 22:33:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:16.851 22:33:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 93E6080948AD4EC78F4EA36ECD4179CB -i 00:09:17.109 22:33:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:17.366 22:33:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:09:17.624 22:33:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:17.624 22:33:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:17.883 nvme0n1 00:09:18.142 22:33:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:18.142 22:33:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:18.401 nvme1n2 00:09:18.401 22:33:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:09:18.401 22:33:01 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@134 -- # jq -r '.[].name' 00:09:18.401 22:33:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:09:18.401 22:33:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:09:18.401 22:33:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:09:18.659 22:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:09:18.659 22:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:09:18.659 22:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:09:18.659 22:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:09:18.917 22:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 4d69afbf-7a63-4761-a8aa-f848918ced73 == \4\d\6\9\a\f\b\f\-\7\a\6\3\-\4\7\6\1\-\a\8\a\a\-\f\8\4\8\9\1\8\c\e\d\7\3 ]] 00:09:18.917 22:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:09:18.917 22:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:09:18.917 22:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:09:19.176 22:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 93e60809-48ad-4ec7-8f4e-a36ecd4179cb == \9\3\e\6\0\8\0\9\-\4\8\a\d\-\4\e\c\7\-\8\f\4\e\-\a\3\6\e\c\d\4\1\7\9\c\b ]] 00:09:19.176 22:33:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1196489 00:09:19.176 22:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@942 -- # '[' -z 1196489 ']' 00:09:19.176 22:33:02 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@946 -- # kill -0 1196489 00:09:19.176 22:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@947 -- # uname 00:09:19.176 22:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:09:19.176 22:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1196489 00:09:19.176 22:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:09:19.176 22:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:09:19.176 22:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1196489' 00:09:19.176 killing process with pid 1196489 00:09:19.176 22:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@961 -- # kill 1196489 00:09:19.176 22:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # wait 1196489 00:09:19.745 22:33:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.005 22:33:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:09:20.005 22:33:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:09:20.005 22:33:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:20.005 22:33:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:09:20.005 22:33:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:20.005 22:33:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:09:20.005 22:33:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:20.005 22:33:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:20.005 rmmod nvme_tcp 00:09:20.005 rmmod nvme_fabrics 00:09:20.005 rmmod nvme_keyring 00:09:20.005 22:33:03 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:20.005 22:33:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:09:20.005 22:33:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:09:20.005 22:33:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1194988 ']' 00:09:20.005 22:33:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1194988 00:09:20.005 22:33:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@942 -- # '[' -z 1194988 ']' 00:09:20.005 22:33:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # kill -0 1194988 00:09:20.005 22:33:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@947 -- # uname 00:09:20.005 22:33:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:09:20.005 22:33:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1194988 00:09:20.005 22:33:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:09:20.005 22:33:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:09:20.005 22:33:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1194988' 00:09:20.005 killing process with pid 1194988 00:09:20.005 22:33:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@961 -- # kill 1194988 00:09:20.005 22:33:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # wait 1194988 00:09:20.573 22:33:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:20.573 22:33:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:20.573 22:33:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:20.573 22:33:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:20.573 22:33:03 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:09:20.573 22:33:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.573 22:33:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:20.573 22:33:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.482 22:33:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:22.482 00:09:22.482 real 0m21.501s 00:09:22.482 user 0m28.602s 00:09:22.482 sys 0m4.124s 00:09:22.482 22:33:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1118 -- # xtrace_disable 00:09:22.482 22:33:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:22.482 ************************************ 00:09:22.482 END TEST nvmf_ns_masking 00:09:22.482 ************************************ 00:09:22.482 22:33:05 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:09:22.482 22:33:05 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:09:22.482 22:33:05 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:22.482 22:33:05 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:09:22.482 22:33:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:09:22.482 22:33:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:22.482 ************************************ 00:09:22.482 START TEST nvmf_nvme_cli 00:09:22.482 ************************************ 00:09:22.482 22:33:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:22.482 * Looking for test storage... 
00:09:22.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:22.482 22:33:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:22.482 22:33:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:09:22.482 22:33:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.482 22:33:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.482 22:33:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.482 22:33:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.482 22:33:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.482 22:33:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.482 22:33:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.482 22:33:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.482 22:33:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.482 22:33:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.482 22:33:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:22.482 22:33:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:22.482 22:33:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.483 22:33:05 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:22.483 22:33:05 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:09:22.483 22:33:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:24.380 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:24.380 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:09:24.380 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:24.380 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:24.380 22:33:07 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:24.380 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:24.380 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:24.380 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:09:24.380 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:24.380 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:09:24.380 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:09:24.380 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:09:24.380 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:09:24.380 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:09:24.380 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:09:24.380 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:24.380 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:24.380 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:24.381 22:33:07 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:24.381 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:24.381 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:24.381 22:33:07 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:24.381 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:24.381 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.381 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:09:24.639 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:24.639 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:24.639 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:24.639 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:24.639 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:24.639 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:24.639 22:33:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:24.639 22:33:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:24.639 22:33:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:24.639 22:33:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:24.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:24.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:09:24.639 00:09:24.639 --- 10.0.0.2 ping statistics --- 00:09:24.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.639 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:09:24.639 22:33:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:24.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:24.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:09:24.639 00:09:24.639 --- 10.0.0.1 ping statistics --- 00:09:24.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.639 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:09:24.639 22:33:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.639 22:33:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:09:24.639 22:33:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:24.639 22:33:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.639 22:33:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:24.639 22:33:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:24.639 22:33:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.639 22:33:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:24.639 22:33:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:24.639 22:33:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:09:24.639 22:33:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:24.639 22:33:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:24.639 22:33:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:24.639 22:33:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1199537 00:09:24.639 22:33:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:24.639 22:33:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1199537 00:09:24.639 22:33:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@823 -- # '[' -z 1199537 ']' 
00:09:24.639 22:33:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.640 22:33:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@828 -- # local max_retries=100 00:09:24.640 22:33:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.640 22:33:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # xtrace_disable 00:09:24.640 22:33:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:24.640 [2024-07-15 22:33:08.111514] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:09:24.640 [2024-07-15 22:33:08.111599] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.899 [2024-07-15 22:33:08.186273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:24.899 [2024-07-15 22:33:08.308910] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.899 [2024-07-15 22:33:08.308973] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.899 [2024-07-15 22:33:08.308990] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.899 [2024-07-15 22:33:08.309003] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.899 [2024-07-15 22:33:08.309015] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:24.899 [2024-07-15 22:33:08.309103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.899 [2024-07-15 22:33:08.309171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:24.899 [2024-07-15 22:33:08.309223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:24.899 [2024-07-15 22:33:08.309227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # return 0 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:25.861 [2024-07-15 22:33:09.099015] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:25.861 Malloc0 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:25.861 
22:33:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:25.861 Malloc1 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 
00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:25.861 [2024-07-15 22:33:09.180045] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:09:25.861 00:09:25.861 Discovery Log Number of Records 2, Generation counter 2 00:09:25.861 =====Discovery Log Entry 0====== 00:09:25.861 trtype: tcp 00:09:25.861 adrfam: ipv4 00:09:25.861 subtype: current discovery subsystem 00:09:25.861 treq: not required 00:09:25.861 portid: 0 00:09:25.861 trsvcid: 4420 00:09:25.861 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:25.861 traddr: 10.0.0.2 00:09:25.861 eflags: explicit discovery connections, duplicate discovery information 00:09:25.861 sectype: none 00:09:25.861 =====Discovery Log Entry 1====== 00:09:25.861 trtype: tcp 00:09:25.861 adrfam: ipv4 00:09:25.861 subtype: nvme subsystem 00:09:25.861 treq: not required 00:09:25.861 portid: 0 00:09:25.861 trsvcid: 4420 00:09:25.861 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:25.861 traddr: 10.0.0.2 00:09:25.861 eflags: none 00:09:25.861 sectype: none 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:09:25.861 22:33:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:26.797 22:33:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:26.797 22:33:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1192 -- # local i=0 00:09:26.797 22:33:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:09:26.797 22:33:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # [[ -n 2 ]] 00:09:26.797 22:33:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # nvme_device_counter=2 00:09:26.797 22:33:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # sleep 2 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 
00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_devices=2 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # return 0 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:09:28.728 /dev/nvme0n1 ]] 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 
00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:28.728 22:33:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:09:28.729 22:33:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:28.729 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.729 22:33:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:28.729 22:33:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1213 -- # local i=0 00:09:28.729 22:33:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:09:28.729 22:33:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:28.729 22:33:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:09:28.729 22:33:12 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:28.729 22:33:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1225 -- # return 0 00:09:28.729 22:33:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:09:28.729 22:33:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:28.729 22:33:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:28.729 22:33:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:28.729 22:33:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:28.729 22:33:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:28.729 22:33:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:09:28.729 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:28.729 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:09:28.729 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:28.729 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:09:28.729 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:28.729 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:28.729 rmmod nvme_tcp 00:09:28.729 rmmod nvme_fabrics 00:09:28.729 rmmod nvme_keyring 00:09:28.729 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:28.729 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:09:28.729 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:09:28.729 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1199537 ']' 00:09:28.729 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1199537 00:09:28.729 22:33:12 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@942 -- # '[' -z 1199537 ']' 00:09:28.729 22:33:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # kill -0 1199537 00:09:28.729 22:33:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@947 -- # uname 00:09:28.729 22:33:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:09:28.729 22:33:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1199537 00:09:28.995 22:33:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:09:28.996 22:33:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:09:28.996 22:33:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1199537' 00:09:28.996 killing process with pid 1199537 00:09:28.996 22:33:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@961 -- # kill 1199537 00:09:28.996 22:33:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # wait 1199537 00:09:29.294 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:29.294 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:29.294 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:29.294 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:29.294 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:29.294 22:33:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.294 22:33:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:29.294 22:33:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.200 22:33:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:31.200 00:09:31.200 real 0m8.778s 00:09:31.200 user 
0m17.602s 00:09:31.200 sys 0m2.176s 00:09:31.200 22:33:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1118 -- # xtrace_disable 00:09:31.200 22:33:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:31.200 ************************************ 00:09:31.200 END TEST nvmf_nvme_cli 00:09:31.200 ************************************ 00:09:31.200 22:33:14 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:09:31.200 22:33:14 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:09:31.200 22:33:14 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:31.200 22:33:14 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:09:31.200 22:33:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:09:31.200 22:33:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:31.200 ************************************ 00:09:31.200 START TEST nvmf_vfio_user 00:09:31.200 ************************************ 00:09:31.200 22:33:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:31.459 * Looking for test storage... 
00:09:31.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:31.459 
22:33:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1200658 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1200658' 00:09:31.459 Process pid: 1200658 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1200658 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@823 -- # '[' -z 1200658 ']' 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@828 -- # local max_retries=100 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # xtrace_disable 00:09:31.459 22:33:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:09:31.459 [2024-07-15 22:33:14.797837] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:09:31.459 [2024-07-15 22:33:14.797947] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.459 [2024-07-15 22:33:14.857502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:31.719 [2024-07-15 22:33:14.964670] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.719 [2024-07-15 22:33:14.964730] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:31.719 [2024-07-15 22:33:14.964746] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.719 [2024-07-15 22:33:14.964759] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.719 [2024-07-15 22:33:14.964770] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:31.719 [2024-07-15 22:33:14.964853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.719 [2024-07-15 22:33:14.964906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:31.719 [2024-07-15 22:33:14.964952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:31.719 [2024-07-15 22:33:14.964958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.719 22:33:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:09:31.719 22:33:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # return 0 00:09:31.719 22:33:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:09:32.654 22:33:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:09:32.913 22:33:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:09:32.913 22:33:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:09:32.913 22:33:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:32.913 22:33:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:09:32.913 22:33:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:33.170 Malloc1 00:09:33.429 22:33:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:09:33.686 22:33:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:09:33.686 22:33:17 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:09:33.943 22:33:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:33.944 22:33:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:09:33.944 22:33:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:34.201 Malloc2 00:09:34.201 22:33:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:09:34.459 22:33:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:09:34.716 22:33:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:09:34.973 22:33:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:09:34.973 22:33:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:09:34.973 22:33:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:34.973 22:33:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:09:34.973 22:33:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:09:34.973 22:33:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:34.973 [2024-07-15 22:33:18.452213] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:09:34.973 [2024-07-15 22:33:18.452264] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1201082 ] 00:09:35.235 [2024-07-15 22:33:18.486276] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:09:35.235 [2024-07-15 22:33:18.494338] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:35.235 [2024-07-15 22:33:18.494365] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f8bfe240000 00:09:35.235 [2024-07-15 22:33:18.495337] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:35.235 [2024-07-15 22:33:18.496324] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:35.235 [2024-07-15 22:33:18.497331] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:35.235 [2024-07-15 22:33:18.498335] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:35.235 [2024-07-15 22:33:18.499346] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:35.235 
[2024-07-15 22:33:18.500348] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:35.235 [2024-07-15 22:33:18.501354] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:35.235 [2024-07-15 22:33:18.502362] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:35.235 [2024-07-15 22:33:18.503368] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:35.235 [2024-07-15 22:33:18.503389] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f8bfe235000 00:09:35.235 [2024-07-15 22:33:18.504503] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:35.235 [2024-07-15 22:33:18.520458] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:09:35.235 [2024-07-15 22:33:18.520495] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:09:35.235 [2024-07-15 22:33:18.525479] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:35.235 [2024-07-15 22:33:18.525538] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:35.235 [2024-07-15 22:33:18.525633] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:09:35.235 [2024-07-15 22:33:18.525665] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] 
setting state to read vs (no timeout) 00:09:35.235 [2024-07-15 22:33:18.525676] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:09:35.235 [2024-07-15 22:33:18.526474] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:09:35.235 [2024-07-15 22:33:18.526495] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:09:35.235 [2024-07-15 22:33:18.526508] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:09:35.235 [2024-07-15 22:33:18.527481] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:35.235 [2024-07-15 22:33:18.527500] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:09:35.235 [2024-07-15 22:33:18.527519] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:09:35.235 [2024-07-15 22:33:18.528485] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:09:35.235 [2024-07-15 22:33:18.528503] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:35.235 [2024-07-15 22:33:18.529490] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:09:35.235 [2024-07-15 22:33:18.529510] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && 
CSTS.RDY = 0 00:09:35.235 [2024-07-15 22:33:18.529519] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:09:35.235 [2024-07-15 22:33:18.529530] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:35.235 [2024-07-15 22:33:18.529639] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:09:35.235 [2024-07-15 22:33:18.529647] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:35.235 [2024-07-15 22:33:18.529656] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:09:35.235 [2024-07-15 22:33:18.530494] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:09:35.235 [2024-07-15 22:33:18.531501] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:09:35.235 [2024-07-15 22:33:18.532507] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:35.235 [2024-07-15 22:33:18.533500] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:35.235 [2024-07-15 22:33:18.533605] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:35.235 [2024-07-15 22:33:18.534517] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:09:35.235 
[2024-07-15 22:33:18.534536] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:35.235 [2024-07-15 22:33:18.534544] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:09:35.235 [2024-07-15 22:33:18.534568] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:09:35.235 [2024-07-15 22:33:18.534581] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:09:35.235 [2024-07-15 22:33:18.534608] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:35.235 [2024-07-15 22:33:18.534618] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:35.235 [2024-07-15 22:33:18.534638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:35.235 [2024-07-15 22:33:18.534687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:35.235 [2024-07-15 22:33:18.534709] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:09:35.235 [2024-07-15 22:33:18.534721] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:09:35.235 [2024-07-15 22:33:18.534729] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:09:35.235 [2024-07-15 22:33:18.534737] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:35.235 [2024-07-15 22:33:18.534744] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:09:35.235 [2024-07-15 22:33:18.534752] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:09:35.235 [2024-07-15 22:33:18.534759] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:09:35.235 [2024-07-15 22:33:18.534772] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:09:35.235 [2024-07-15 22:33:18.534787] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:09:35.235 [2024-07-15 22:33:18.534799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:35.235 [2024-07-15 22:33:18.534821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:35.235 [2024-07-15 22:33:18.534834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:35.235 [2024-07-15 22:33:18.534846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:35.235 [2024-07-15 22:33:18.534873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:35.235 [2024-07-15 22:33:18.534889] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep 
alive timeout (timeout 30000 ms) 00:09:35.235 [2024-07-15 22:33:18.534908] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:35.235 [2024-07-15 22:33:18.534925] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:35.235 [2024-07-15 22:33:18.534937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:35.235 [2024-07-15 22:33:18.534948] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:09:35.235 [2024-07-15 22:33:18.534957] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:35.235 [2024-07-15 22:33:18.534968] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:09:35.235 [2024-07-15 22:33:18.534979] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:09:35.236 [2024-07-15 22:33:18.534992] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:35.236 [2024-07-15 22:33:18.535007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:35.236 [2024-07-15 22:33:18.535072] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:09:35.236 [2024-07-15 22:33:18.535092] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:09:35.236 [2024-07-15 22:33:18.535108] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:35.236 [2024-07-15 22:33:18.535116] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:35.236 [2024-07-15 22:33:18.535126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:09:35.236 [2024-07-15 22:33:18.535141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:35.236 [2024-07-15 22:33:18.535175] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:09:35.236 [2024-07-15 22:33:18.535197] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:09:35.236 [2024-07-15 22:33:18.535212] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:09:35.236 [2024-07-15 22:33:18.535225] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:35.236 [2024-07-15 22:33:18.535248] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:35.236 [2024-07-15 22:33:18.535258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:35.236 [2024-07-15 22:33:18.535278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:35.236 [2024-07-15 22:33:18.535301] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:35.236 [2024-07-15 22:33:18.535315] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:09:35.236 [2024-07-15 22:33:18.535328] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:35.236 [2024-07-15 22:33:18.535336] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:35.236 [2024-07-15 22:33:18.535345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:35.236 [2024-07-15 22:33:18.535358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:35.236 [2024-07-15 22:33:18.535372] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:35.236 [2024-07-15 22:33:18.535383] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:09:35.236 [2024-07-15 22:33:18.535398] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:09:35.236 [2024-07-15 22:33:18.535409] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:09:35.236 [2024-07-15 22:33:18.535417] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:09:35.236 [2024-07-15 22:33:18.535425] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:09:35.236 [2024-07-15 22:33:18.535433] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:09:35.236 [2024-07-15 22:33:18.535440] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:09:35.236 [2024-07-15 22:33:18.535451] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:09:35.236 [2024-07-15 22:33:18.535478] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:35.236 [2024-07-15 22:33:18.535496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:35.236 [2024-07-15 22:33:18.535514] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:35.236 [2024-07-15 22:33:18.535526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:35.236 [2024-07-15 22:33:18.535542] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:35.236 [2024-07-15 22:33:18.535554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:35.236 [2024-07-15 22:33:18.535570] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:35.236 [2024-07-15 22:33:18.535581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e 
sqhd:000f p:1 m:0 dnr:0 00:09:35.236 [2024-07-15 22:33:18.535603] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:35.236 [2024-07-15 22:33:18.535613] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:35.236 [2024-07-15 22:33:18.535619] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:09:35.236 [2024-07-15 22:33:18.535625] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:09:35.236 [2024-07-15 22:33:18.535634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:35.236 [2024-07-15 22:33:18.535646] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:35.236 [2024-07-15 22:33:18.535654] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:35.236 [2024-07-15 22:33:18.535663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:35.236 [2024-07-15 22:33:18.535673] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:35.236 [2024-07-15 22:33:18.535681] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:35.236 [2024-07-15 22:33:18.535690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:35.236 [2024-07-15 22:33:18.535701] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:35.236 [2024-07-15 22:33:18.535709] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 
00:09:35.236 [2024-07-15 22:33:18.535718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:09:35.236 [2024-07-15 22:33:18.535729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:35.236 [2024-07-15 22:33:18.535748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:35.236 [2024-07-15 22:33:18.535765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:35.236 [2024-07-15 22:33:18.535777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:35.236 ===================================================== 00:09:35.236 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:35.236 ===================================================== 00:09:35.236 Controller Capabilities/Features 00:09:35.236 ================================ 00:09:35.236 Vendor ID: 4e58 00:09:35.236 Subsystem Vendor ID: 4e58 00:09:35.236 Serial Number: SPDK1 00:09:35.236 Model Number: SPDK bdev Controller 00:09:35.236 Firmware Version: 24.09 00:09:35.236 Recommended Arb Burst: 6 00:09:35.236 IEEE OUI Identifier: 8d 6b 50 00:09:35.236 Multi-path I/O 00:09:35.236 May have multiple subsystem ports: Yes 00:09:35.236 May have multiple controllers: Yes 00:09:35.236 Associated with SR-IOV VF: No 00:09:35.236 Max Data Transfer Size: 131072 00:09:35.236 Max Number of Namespaces: 32 00:09:35.236 Max Number of I/O Queues: 127 00:09:35.236 NVMe Specification Version (VS): 1.3 00:09:35.236 NVMe Specification Version (Identify): 1.3 00:09:35.236 Maximum Queue Entries: 256 00:09:35.236 Contiguous Queues Required: Yes 00:09:35.236 Arbitration Mechanisms Supported 00:09:35.236 
Weighted Round Robin: Not Supported 00:09:35.236 Vendor Specific: Not Supported 00:09:35.236 Reset Timeout: 15000 ms 00:09:35.236 Doorbell Stride: 4 bytes 00:09:35.236 NVM Subsystem Reset: Not Supported 00:09:35.236 Command Sets Supported 00:09:35.236 NVM Command Set: Supported 00:09:35.236 Boot Partition: Not Supported 00:09:35.236 Memory Page Size Minimum: 4096 bytes 00:09:35.236 Memory Page Size Maximum: 4096 bytes 00:09:35.236 Persistent Memory Region: Not Supported 00:09:35.236 Optional Asynchronous Events Supported 00:09:35.236 Namespace Attribute Notices: Supported 00:09:35.236 Firmware Activation Notices: Not Supported 00:09:35.236 ANA Change Notices: Not Supported 00:09:35.236 PLE Aggregate Log Change Notices: Not Supported 00:09:35.236 LBA Status Info Alert Notices: Not Supported 00:09:35.236 EGE Aggregate Log Change Notices: Not Supported 00:09:35.236 Normal NVM Subsystem Shutdown event: Not Supported 00:09:35.236 Zone Descriptor Change Notices: Not Supported 00:09:35.236 Discovery Log Change Notices: Not Supported 00:09:35.236 Controller Attributes 00:09:35.236 128-bit Host Identifier: Supported 00:09:35.236 Non-Operational Permissive Mode: Not Supported 00:09:35.236 NVM Sets: Not Supported 00:09:35.236 Read Recovery Levels: Not Supported 00:09:35.236 Endurance Groups: Not Supported 00:09:35.236 Predictable Latency Mode: Not Supported 00:09:35.236 Traffic Based Keep ALive: Not Supported 00:09:35.236 Namespace Granularity: Not Supported 00:09:35.236 SQ Associations: Not Supported 00:09:35.236 UUID List: Not Supported 00:09:35.236 Multi-Domain Subsystem: Not Supported 00:09:35.236 Fixed Capacity Management: Not Supported 00:09:35.236 Variable Capacity Management: Not Supported 00:09:35.236 Delete Endurance Group: Not Supported 00:09:35.236 Delete NVM Set: Not Supported 00:09:35.236 Extended LBA Formats Supported: Not Supported 00:09:35.236 Flexible Data Placement Supported: Not Supported 00:09:35.236 00:09:35.236 Controller Memory Buffer Support 
00:09:35.236 ================================ 00:09:35.236 Supported: No 00:09:35.236 00:09:35.236 Persistent Memory Region Support 00:09:35.236 ================================ 00:09:35.236 Supported: No 00:09:35.236 00:09:35.236 Admin Command Set Attributes 00:09:35.236 ============================ 00:09:35.236 Security Send/Receive: Not Supported 00:09:35.236 Format NVM: Not Supported 00:09:35.237 Firmware Activate/Download: Not Supported 00:09:35.237 Namespace Management: Not Supported 00:09:35.237 Device Self-Test: Not Supported 00:09:35.237 Directives: Not Supported 00:09:35.237 NVMe-MI: Not Supported 00:09:35.237 Virtualization Management: Not Supported 00:09:35.237 Doorbell Buffer Config: Not Supported 00:09:35.237 Get LBA Status Capability: Not Supported 00:09:35.237 Command & Feature Lockdown Capability: Not Supported 00:09:35.237 Abort Command Limit: 4 00:09:35.237 Async Event Request Limit: 4 00:09:35.237 Number of Firmware Slots: N/A 00:09:35.237 Firmware Slot 1 Read-Only: N/A 00:09:35.237 Firmware Activation Without Reset: N/A 00:09:35.237 Multiple Update Detection Support: N/A 00:09:35.237 Firmware Update Granularity: No Information Provided 00:09:35.237 Per-Namespace SMART Log: No 00:09:35.237 Asymmetric Namespace Access Log Page: Not Supported 00:09:35.237 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:09:35.237 Command Effects Log Page: Supported 00:09:35.237 Get Log Page Extended Data: Supported 00:09:35.237 Telemetry Log Pages: Not Supported 00:09:35.237 Persistent Event Log Pages: Not Supported 00:09:35.237 Supported Log Pages Log Page: May Support 00:09:35.237 Commands Supported & Effects Log Page: Not Supported 00:09:35.237 Feature Identifiers & Effects Log Page:May Support 00:09:35.237 NVMe-MI Commands & Effects Log Page: May Support 00:09:35.237 Data Area 4 for Telemetry Log: Not Supported 00:09:35.237 Error Log Page Entries Supported: 128 00:09:35.237 Keep Alive: Supported 00:09:35.237 Keep Alive Granularity: 10000 ms 00:09:35.237 
00:09:35.237 NVM Command Set Attributes 00:09:35.237 ========================== 00:09:35.237 Submission Queue Entry Size 00:09:35.237 Max: 64 00:09:35.237 Min: 64 00:09:35.237 Completion Queue Entry Size 00:09:35.237 Max: 16 00:09:35.237 Min: 16 00:09:35.237 Number of Namespaces: 32 00:09:35.237 Compare Command: Supported 00:09:35.237 Write Uncorrectable Command: Not Supported 00:09:35.237 Dataset Management Command: Supported 00:09:35.237 Write Zeroes Command: Supported 00:09:35.237 Set Features Save Field: Not Supported 00:09:35.237 Reservations: Not Supported 00:09:35.237 Timestamp: Not Supported 00:09:35.237 Copy: Supported 00:09:35.237 Volatile Write Cache: Present 00:09:35.237 Atomic Write Unit (Normal): 1 00:09:35.237 Atomic Write Unit (PFail): 1 00:09:35.237 Atomic Compare & Write Unit: 1 00:09:35.237 Fused Compare & Write: Supported 00:09:35.237 Scatter-Gather List 00:09:35.237 SGL Command Set: Supported (Dword aligned) 00:09:35.237 SGL Keyed: Not Supported 00:09:35.237 SGL Bit Bucket Descriptor: Not Supported 00:09:35.237 SGL Metadata Pointer: Not Supported 00:09:35.237 Oversized SGL: Not Supported 00:09:35.237 SGL Metadata Address: Not Supported 00:09:35.237 SGL Offset: Not Supported 00:09:35.237 Transport SGL Data Block: Not Supported 00:09:35.237 Replay Protected Memory Block: Not Supported 00:09:35.237 00:09:35.237 Firmware Slot Information 00:09:35.237 ========================= 00:09:35.237 Active slot: 1 00:09:35.237 Slot 1 Firmware Revision: 24.09 00:09:35.237 00:09:35.237 00:09:35.237 Commands Supported and Effects 00:09:35.237 ============================== 00:09:35.237 Admin Commands 00:09:35.237 -------------- 00:09:35.237 Get Log Page (02h): Supported 00:09:35.237 Identify (06h): Supported 00:09:35.237 Abort (08h): Supported 00:09:35.237 Set Features (09h): Supported 00:09:35.237 Get Features (0Ah): Supported 00:09:35.237 Asynchronous Event Request (0Ch): Supported 00:09:35.237 Keep Alive (18h): Supported 00:09:35.237 I/O Commands 00:09:35.237 
------------ 00:09:35.237 Flush (00h): Supported LBA-Change 00:09:35.237 Write (01h): Supported LBA-Change 00:09:35.237 Read (02h): Supported 00:09:35.237 Compare (05h): Supported 00:09:35.237 Write Zeroes (08h): Supported LBA-Change 00:09:35.237 Dataset Management (09h): Supported LBA-Change 00:09:35.237 Copy (19h): Supported LBA-Change 00:09:35.237 00:09:35.237 Error Log 00:09:35.237 ========= 00:09:35.237 00:09:35.237 Arbitration 00:09:35.237 =========== 00:09:35.237 Arbitration Burst: 1 00:09:35.237 00:09:35.237 Power Management 00:09:35.237 ================ 00:09:35.237 Number of Power States: 1 00:09:35.237 Current Power State: Power State #0 00:09:35.237 Power State #0: 00:09:35.237 Max Power: 0.00 W 00:09:35.237 Non-Operational State: Operational 00:09:35.237 Entry Latency: Not Reported 00:09:35.237 Exit Latency: Not Reported 00:09:35.237 Relative Read Throughput: 0 00:09:35.237 Relative Read Latency: 0 00:09:35.237 Relative Write Throughput: 0 00:09:35.237 Relative Write Latency: 0 00:09:35.237 Idle Power: Not Reported 00:09:35.237 Active Power: Not Reported 00:09:35.237 Non-Operational Permissive Mode: Not Supported 00:09:35.237 00:09:35.237 Health Information 00:09:35.237 ================== 00:09:35.237 Critical Warnings: 00:09:35.237 Available Spare Space: OK 00:09:35.237 Temperature: OK 00:09:35.237 Device Reliability: OK 00:09:35.237 Read Only: No 00:09:35.237 Volatile Memory Backup: OK 00:09:35.237 Current Temperature: 0 Kelvin (-273 Celsius) 00:09:35.237 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:35.237 Available Spare: 0% 00:09:35.237 Available Sp[2024-07-15 22:33:18.535919] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:35.237 [2024-07-15 22:33:18.535937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:09:35.237 [2024-07-15 22:33:18.535982] 
nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:09:35.237 [2024-07-15 22:33:18.536001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.237 [2024-07-15 22:33:18.536012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.237 [2024-07-15 22:33:18.536023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.237 [2024-07-15 22:33:18.536033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.237 [2024-07-15 22:33:18.539888] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:35.237 [2024-07-15 22:33:18.539911] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:09:35.237 [2024-07-15 22:33:18.540539] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:35.237 [2024-07-15 22:33:18.540624] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:09:35.237 [2024-07-15 22:33:18.540639] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:09:35.237 [2024-07-15 22:33:18.541551] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:09:35.237 [2024-07-15 22:33:18.541574] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:09:35.237 [2024-07-15 
22:33:18.541630] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:09:35.237 [2024-07-15 22:33:18.543589] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:35.237 are Threshold: 0% 00:09:35.237 Life Percentage Used: 0% 00:09:35.237 Data Units Read: 0 00:09:35.237 Data Units Written: 0 00:09:35.237 Host Read Commands: 0 00:09:35.237 Host Write Commands: 0 00:09:35.237 Controller Busy Time: 0 minutes 00:09:35.237 Power Cycles: 0 00:09:35.237 Power On Hours: 0 hours 00:09:35.237 Unsafe Shutdowns: 0 00:09:35.237 Unrecoverable Media Errors: 0 00:09:35.237 Lifetime Error Log Entries: 0 00:09:35.237 Warning Temperature Time: 0 minutes 00:09:35.237 Critical Temperature Time: 0 minutes 00:09:35.237 00:09:35.237 Number of Queues 00:09:35.237 ================ 00:09:35.237 Number of I/O Submission Queues: 127 00:09:35.237 Number of I/O Completion Queues: 127 00:09:35.237 00:09:35.237 Active Namespaces 00:09:35.237 ================= 00:09:35.237 Namespace ID:1 00:09:35.237 Error Recovery Timeout: Unlimited 00:09:35.237 Command Set Identifier: NVM (00h) 00:09:35.237 Deallocate: Supported 00:09:35.237 Deallocated/Unwritten Error: Not Supported 00:09:35.237 Deallocated Read Value: Unknown 00:09:35.237 Deallocate in Write Zeroes: Not Supported 00:09:35.237 Deallocated Guard Field: 0xFFFF 00:09:35.237 Flush: Supported 00:09:35.237 Reservation: Supported 00:09:35.237 Namespace Sharing Capabilities: Multiple Controllers 00:09:35.237 Size (in LBAs): 131072 (0GiB) 00:09:35.237 Capacity (in LBAs): 131072 (0GiB) 00:09:35.237 Utilization (in LBAs): 131072 (0GiB) 00:09:35.237 NGUID: 334D449804354C93A68C352D94E23FB9 00:09:35.237 UUID: 334d4498-0435-4c93-a68c-352d94e23fb9 00:09:35.237 Thin Provisioning: Not Supported 00:09:35.237 Per-NS Atomic Units: Yes 00:09:35.237 Atomic Boundary Size (Normal): 0 00:09:35.237 Atomic Boundary Size (PFail): 0 
00:09:35.237 Atomic Boundary Offset: 0 00:09:35.237 Maximum Single Source Range Length: 65535 00:09:35.237 Maximum Copy Length: 65535 00:09:35.237 Maximum Source Range Count: 1 00:09:35.237 NGUID/EUI64 Never Reused: No 00:09:35.237 Namespace Write Protected: No 00:09:35.237 Number of LBA Formats: 1 00:09:35.237 Current LBA Format: LBA Format #00 00:09:35.237 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:35.237 00:09:35.237 22:33:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:35.495 [2024-07-15 22:33:18.765674] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:40.770 Initializing NVMe Controllers 00:09:40.770 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:40.770 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:40.770 Initialization complete. Launching workers. 
00:09:40.770 ======================================================== 00:09:40.770 Latency(us) 00:09:40.770 Device Information : IOPS MiB/s Average min max 00:09:40.770 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34715.97 135.61 3687.70 1177.87 8575.78 00:09:40.770 ======================================================== 00:09:40.770 Total : 34715.97 135.61 3687.70 1177.87 8575.78 00:09:40.770 00:09:40.770 [2024-07-15 22:33:23.789221] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:40.770 22:33:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:09:40.770 [2024-07-15 22:33:24.019360] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:46.042 Initializing NVMe Controllers 00:09:46.042 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:46.042 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:46.042 Initialization complete. Launching workers. 
00:09:46.042 ======================================================== 00:09:46.042 Latency(us) 00:09:46.042 Device Information : IOPS MiB/s Average min max 00:09:46.042 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15996.40 62.49 8021.16 4985.26 15962.83 00:09:46.042 ======================================================== 00:09:46.042 Total : 15996.40 62.49 8021.16 4985.26 15962.83 00:09:46.042 00:09:46.042 [2024-07-15 22:33:29.061822] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:46.042 22:33:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:09:46.042 [2024-07-15 22:33:29.272844] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:51.352 [2024-07-15 22:33:34.342233] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:51.352 Initializing NVMe Controllers 00:09:51.353 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:51.353 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:51.353 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:09:51.353 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:09:51.353 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:09:51.353 Initialization complete. Launching workers. 
00:09:51.353 Starting thread on core 2 00:09:51.353 Starting thread on core 3 00:09:51.353 Starting thread on core 1 00:09:51.353 22:33:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:09:51.353 [2024-07-15 22:33:34.645293] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:54.642 [2024-07-15 22:33:37.711292] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:54.642 Initializing NVMe Controllers 00:09:54.642 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:54.642 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:54.642 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:09:54.642 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:09:54.642 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:09:54.643 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:09:54.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:09:54.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:09:54.643 Initialization complete. Launching workers. 
00:09:54.643 Starting thread on core 1 with urgent priority queue 00:09:54.643 Starting thread on core 2 with urgent priority queue 00:09:54.643 Starting thread on core 3 with urgent priority queue 00:09:54.643 Starting thread on core 0 with urgent priority queue 00:09:54.643 SPDK bdev Controller (SPDK1 ) core 0: 5676.00 IO/s 17.62 secs/100000 ios 00:09:54.643 SPDK bdev Controller (SPDK1 ) core 1: 4813.33 IO/s 20.78 secs/100000 ios 00:09:54.643 SPDK bdev Controller (SPDK1 ) core 2: 5953.67 IO/s 16.80 secs/100000 ios 00:09:54.643 SPDK bdev Controller (SPDK1 ) core 3: 6383.33 IO/s 15.67 secs/100000 ios 00:09:54.643 ======================================================== 00:09:54.643 00:09:54.643 22:33:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:09:54.643 [2024-07-15 22:33:38.015445] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:54.643 Initializing NVMe Controllers 00:09:54.643 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:54.643 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:54.643 Namespace ID: 1 size: 0GB 00:09:54.643 Initialization complete. 00:09:54.643 INFO: using host memory buffer for IO 00:09:54.643 Hello world! 
00:09:54.643 [2024-07-15 22:33:38.049089] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:54.643 22:33:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:09:54.902 [2024-07-15 22:33:38.349328] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:56.279 Initializing NVMe Controllers 00:09:56.279 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:56.279 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:56.279 Initialization complete. Launching workers. 00:09:56.279 submit (in ns) avg, min, max = 8353.8, 3495.6, 4014626.7 00:09:56.279 complete (in ns) avg, min, max = 22915.0, 2058.9, 4997831.1 00:09:56.279 00:09:56.279 Submit histogram 00:09:56.279 ================ 00:09:56.279 Range in us Cumulative Count 00:09:56.279 3.484 - 3.508: 0.0663% ( 9) 00:09:56.279 3.508 - 3.532: 0.5010% ( 59) 00:09:56.279 3.532 - 3.556: 1.7831% ( 174) 00:09:56.279 3.556 - 3.579: 4.7966% ( 409) 00:09:56.279 3.579 - 3.603: 10.8459% ( 821) 00:09:56.279 3.603 - 3.627: 19.1055% ( 1121) 00:09:56.279 3.627 - 3.650: 29.0156% ( 1345) 00:09:56.279 3.650 - 3.674: 38.5279% ( 1291) 00:09:56.279 3.674 - 3.698: 46.1612% ( 1036) 00:09:56.279 3.698 - 3.721: 53.4556% ( 990) 00:09:56.279 3.721 - 3.745: 59.0407% ( 758) 00:09:56.279 3.745 - 3.769: 63.8373% ( 651) 00:09:56.279 3.769 - 3.793: 67.4330% ( 488) 00:09:56.279 3.793 - 3.816: 70.9623% ( 479) 00:09:56.279 3.816 - 3.840: 73.9832% ( 410) 00:09:56.279 3.840 - 3.864: 77.6452% ( 497) 00:09:56.279 3.864 - 3.887: 81.0492% ( 462) 00:09:56.279 3.887 - 3.911: 83.9080% ( 388) 00:09:56.279 3.911 - 3.935: 86.5458% ( 358) 00:09:56.279 3.935 - 3.959: 88.6163% ( 281) 00:09:56.279 3.959 - 3.982: 90.3183% ( 231) 00:09:56.279 
3.982 - 4.006: 92.0130% ( 230) 00:09:56.279 4.006 - 4.030: 93.1403% ( 153) 00:09:56.279 4.030 - 4.053: 93.9434% ( 109) 00:09:56.279 4.053 - 4.077: 94.6360% ( 94) 00:09:56.279 4.077 - 4.101: 95.1886% ( 75) 00:09:56.279 4.101 - 4.124: 95.7118% ( 71) 00:09:56.279 4.124 - 4.148: 96.0875% ( 51) 00:09:56.279 4.148 - 4.172: 96.3823% ( 40) 00:09:56.279 4.172 - 4.196: 96.5517% ( 23) 00:09:56.279 4.196 - 4.219: 96.6328% ( 11) 00:09:56.279 4.219 - 4.243: 96.7433% ( 15) 00:09:56.279 4.243 - 4.267: 96.8391% ( 13) 00:09:56.279 4.267 - 4.290: 96.9422% ( 14) 00:09:56.279 4.290 - 4.314: 97.0159% ( 10) 00:09:56.279 4.314 - 4.338: 97.1043% ( 12) 00:09:56.279 4.338 - 4.361: 97.2001% ( 13) 00:09:56.279 4.361 - 4.385: 97.2443% ( 6) 00:09:56.279 4.385 - 4.409: 97.3033% ( 8) 00:09:56.279 4.409 - 4.433: 97.3254% ( 3) 00:09:56.279 4.433 - 4.456: 97.3401% ( 2) 00:09:56.279 4.456 - 4.480: 97.3548% ( 2) 00:09:56.279 4.480 - 4.504: 97.3696% ( 2) 00:09:56.279 4.504 - 4.527: 97.3991% ( 4) 00:09:56.279 4.575 - 4.599: 97.4064% ( 1) 00:09:56.279 4.622 - 4.646: 97.4138% ( 1) 00:09:56.280 4.646 - 4.670: 97.4212% ( 1) 00:09:56.280 4.670 - 4.693: 97.4875% ( 9) 00:09:56.280 4.693 - 4.717: 97.5317% ( 6) 00:09:56.280 4.717 - 4.741: 97.5612% ( 4) 00:09:56.280 4.741 - 4.764: 97.6054% ( 6) 00:09:56.280 4.764 - 4.788: 97.6348% ( 4) 00:09:56.280 4.788 - 4.812: 97.6643% ( 4) 00:09:56.280 4.812 - 4.836: 97.7380% ( 10) 00:09:56.280 4.836 - 4.859: 97.8043% ( 9) 00:09:56.280 4.859 - 4.883: 97.8411% ( 5) 00:09:56.280 4.883 - 4.907: 97.8485% ( 1) 00:09:56.280 4.907 - 4.930: 97.8559% ( 1) 00:09:56.280 4.930 - 4.954: 97.9001% ( 6) 00:09:56.280 4.954 - 4.978: 97.9443% ( 6) 00:09:56.280 4.978 - 5.001: 97.9590% ( 2) 00:09:56.280 5.001 - 5.025: 97.9885% ( 4) 00:09:56.280 5.025 - 5.049: 97.9959% ( 1) 00:09:56.280 5.049 - 5.073: 98.0253% ( 4) 00:09:56.280 5.073 - 5.096: 98.0548% ( 4) 00:09:56.280 5.096 - 5.120: 98.0622% ( 1) 00:09:56.280 5.120 - 5.144: 98.0843% ( 3) 00:09:56.280 5.167 - 5.191: 98.0917% ( 1) 00:09:56.280 5.191 
- 5.215: 98.1138% ( 3) 00:09:56.280 5.239 - 5.262: 98.1285% ( 2) 00:09:56.280 5.262 - 5.286: 98.1506% ( 3) 00:09:56.280 5.310 - 5.333: 98.1580% ( 1) 00:09:56.280 5.428 - 5.452: 98.1653% ( 1) 00:09:56.280 5.499 - 5.523: 98.1801% ( 2) 00:09:56.280 5.594 - 5.618: 98.1948% ( 2) 00:09:56.280 5.665 - 5.689: 98.2022% ( 1) 00:09:56.280 5.689 - 5.713: 98.2095% ( 1) 00:09:56.280 5.807 - 5.831: 98.2169% ( 1) 00:09:56.280 5.831 - 5.855: 98.2317% ( 2) 00:09:56.280 5.855 - 5.879: 98.2390% ( 1) 00:09:56.280 5.926 - 5.950: 98.2464% ( 1) 00:09:56.280 6.068 - 6.116: 98.2611% ( 2) 00:09:56.280 6.163 - 6.210: 98.2685% ( 1) 00:09:56.280 6.258 - 6.305: 98.2759% ( 1) 00:09:56.280 6.305 - 6.353: 98.2832% ( 1) 00:09:56.280 6.495 - 6.542: 98.2906% ( 1) 00:09:56.280 6.542 - 6.590: 98.2980% ( 1) 00:09:56.280 6.590 - 6.637: 98.3053% ( 1) 00:09:56.280 6.827 - 6.874: 98.3201% ( 2) 00:09:56.280 6.874 - 6.921: 98.3422% ( 3) 00:09:56.280 6.921 - 6.969: 98.3495% ( 1) 00:09:56.280 7.016 - 7.064: 98.3569% ( 1) 00:09:56.280 7.064 - 7.111: 98.3643% ( 1) 00:09:56.280 7.111 - 7.159: 98.3716% ( 1) 00:09:56.280 7.159 - 7.206: 98.3864% ( 2) 00:09:56.280 7.301 - 7.348: 98.3938% ( 1) 00:09:56.280 7.348 - 7.396: 98.4159% ( 3) 00:09:56.280 7.443 - 7.490: 98.4232% ( 1) 00:09:56.280 7.490 - 7.538: 98.4306% ( 1) 00:09:56.280 7.585 - 7.633: 98.4380% ( 1) 00:09:56.280 7.633 - 7.680: 98.4748% ( 5) 00:09:56.280 7.727 - 7.775: 98.4822% ( 1) 00:09:56.280 7.870 - 7.917: 98.4895% ( 1) 00:09:56.280 7.917 - 7.964: 98.4969% ( 1) 00:09:56.280 8.012 - 8.059: 98.5043% ( 1) 00:09:56.280 8.154 - 8.201: 98.5190% ( 2) 00:09:56.280 8.201 - 8.249: 98.5337% ( 2) 00:09:56.280 8.249 - 8.296: 98.5411% ( 1) 00:09:56.280 8.296 - 8.344: 98.5632% ( 3) 00:09:56.280 8.439 - 8.486: 98.5706% ( 1) 00:09:56.280 8.486 - 8.533: 98.5780% ( 1) 00:09:56.280 8.533 - 8.581: 98.5853% ( 1) 00:09:56.280 8.723 - 8.770: 98.6001% ( 2) 00:09:56.280 8.770 - 8.818: 98.6074% ( 1) 00:09:56.280 8.865 - 8.913: 98.6148% ( 1) 00:09:56.280 8.913 - 8.960: 98.6222% ( 1) 
00:09:56.280 8.960 - 9.007: 98.6295% ( 1) 00:09:56.280 9.150 - 9.197: 98.6443% ( 2) 00:09:56.280 9.197 - 9.244: 98.6590% ( 2) 00:09:56.280 9.339 - 9.387: 98.6664% ( 1) 00:09:56.280 9.624 - 9.671: 98.6737% ( 1) 00:09:56.280 9.719 - 9.766: 98.6885% ( 2) 00:09:56.280 10.003 - 10.050: 98.6958% ( 1) 00:09:56.280 10.050 - 10.098: 98.7106% ( 2) 00:09:56.280 10.145 - 10.193: 98.7179% ( 1) 00:09:56.280 10.477 - 10.524: 98.7327% ( 2) 00:09:56.280 10.951 - 10.999: 98.7401% ( 1) 00:09:56.280 10.999 - 11.046: 98.7474% ( 1) 00:09:56.280 11.283 - 11.330: 98.7548% ( 1) 00:09:56.280 11.378 - 11.425: 98.7622% ( 1) 00:09:56.280 11.425 - 11.473: 98.7695% ( 1) 00:09:56.280 11.473 - 11.520: 98.7769% ( 1) 00:09:56.280 11.615 - 11.662: 98.7843% ( 1) 00:09:56.280 11.662 - 11.710: 98.7916% ( 1) 00:09:56.280 11.757 - 11.804: 98.7990% ( 1) 00:09:56.280 11.804 - 11.852: 98.8064% ( 1) 00:09:56.280 11.994 - 12.041: 98.8137% ( 1) 00:09:56.280 12.089 - 12.136: 98.8285% ( 2) 00:09:56.280 12.136 - 12.231: 98.8358% ( 1) 00:09:56.280 12.516 - 12.610: 98.8506% ( 2) 00:09:56.280 12.610 - 12.705: 98.8727% ( 3) 00:09:56.280 12.800 - 12.895: 98.8800% ( 1) 00:09:56.280 12.990 - 13.084: 98.8874% ( 1) 00:09:56.280 13.274 - 13.369: 98.8948% ( 1) 00:09:56.280 13.464 - 13.559: 98.9022% ( 1) 00:09:56.280 13.559 - 13.653: 98.9169% ( 2) 00:09:56.280 13.748 - 13.843: 98.9316% ( 2) 00:09:56.280 14.507 - 14.601: 98.9464% ( 2) 00:09:56.280 14.601 - 14.696: 98.9758% ( 4) 00:09:56.280 14.981 - 15.076: 98.9832% ( 1) 00:09:56.280 15.455 - 15.550: 98.9906% ( 1) 00:09:56.280 15.550 - 15.644: 99.0053% ( 2) 00:09:56.280 17.067 - 17.161: 99.0127% ( 1) 00:09:56.280 17.161 - 17.256: 99.0200% ( 1) 00:09:56.280 17.256 - 17.351: 99.0274% ( 1) 00:09:56.280 17.351 - 17.446: 99.0495% ( 3) 00:09:56.280 17.446 - 17.541: 99.0642% ( 2) 00:09:56.280 17.541 - 17.636: 99.1011% ( 5) 00:09:56.280 17.636 - 17.730: 99.1748% ( 10) 00:09:56.280 17.730 - 17.825: 99.1969% ( 3) 00:09:56.280 17.825 - 17.920: 99.2411% ( 6) 00:09:56.280 17.920 - 18.015: 
99.3074% ( 9) 00:09:56.280 18.015 - 18.110: 99.3811% ( 10) 00:09:56.280 18.110 - 18.204: 99.4400% ( 8) 00:09:56.280 18.204 - 18.299: 99.5137% ( 10) 00:09:56.280 18.299 - 18.394: 99.5948% ( 11) 00:09:56.280 18.394 - 18.489: 99.6316% ( 5) 00:09:56.280 18.489 - 18.584: 99.6537% ( 3) 00:09:56.280 18.584 - 18.679: 99.6758% ( 3) 00:09:56.280 18.679 - 18.773: 99.7126% ( 5) 00:09:56.280 18.773 - 18.868: 99.7421% ( 4) 00:09:56.280 18.963 - 19.058: 99.7495% ( 1) 00:09:56.280 19.058 - 19.153: 99.7716% ( 3) 00:09:56.280 19.153 - 19.247: 99.7937% ( 3) 00:09:56.280 19.247 - 19.342: 99.8011% ( 1) 00:09:56.280 19.627 - 19.721: 99.8084% ( 1) 00:09:56.280 19.816 - 19.911: 99.8158% ( 1) 00:09:56.280 19.911 - 20.006: 99.8232% ( 1) 00:09:56.280 21.333 - 21.428: 99.8305% ( 1) 00:09:56.280 22.281 - 22.376: 99.8379% ( 1) 00:09:56.280 22.566 - 22.661: 99.8453% ( 1) 00:09:56.280 23.040 - 23.135: 99.8526% ( 1) 00:09:56.280 23.514 - 23.609: 99.8600% ( 1) 00:09:56.280 24.652 - 24.841: 99.8674% ( 1) 00:09:56.280 26.738 - 26.927: 99.8747% ( 1) 00:09:56.280 29.582 - 29.772: 99.8821% ( 1) 00:09:56.280 32.427 - 32.616: 99.8895% ( 1) 00:09:56.280 3980.705 - 4004.978: 99.9853% ( 13) 00:09:56.280 4004.978 - 4029.250: 100.0000% ( 2) 00:09:56.280 00:09:56.280 Complete histogram 00:09:56.280 ================== 00:09:56.280 Range in us Cumulative Count 00:09:56.280 2.050 - 2.062: 0.0516% ( 7) 00:09:56.280 2.062 - 2.074: 19.8718% ( 2690) 00:09:56.280 2.074 - 2.086: 31.8155% ( 1621) 00:09:56.280 2.086 - 2.098: 35.4406% ( 492) 00:09:56.280 2.098 - 2.110: 54.7893% ( 2626) 00:09:56.280 2.110 - 2.121: 61.3469% ( 890) 00:09:56.280 2.121 - 2.133: 64.6110% ( 443) 00:09:56.280 2.133 - 2.145: 75.3389% ( 1456) 00:09:56.280 2.145 - 2.157: 77.9988% ( 361) 00:09:56.280 2.157 - 2.169: 81.3292% ( 452) 00:09:56.280 2.169 - 2.181: 87.6363% ( 856) 00:09:56.280 2.181 - 2.193: 89.6036% ( 267) 00:09:56.280 2.193 - 2.204: 90.3699% ( 104) 00:09:56.280 2.204 - 2.216: 91.3719% ( 136) 00:09:56.280 2.216 - 2.228: 92.5214% ( 156) 
00:09:56.280 2.228 - 2.240: 94.0760% ( 211) 00:09:56.280 2.240 - 2.252: 94.8792% ( 109) 00:09:56.280 2.252 - 2.264: 95.1223% ( 33) 00:09:56.280 2.264 - 2.276: 95.2255% ( 14) 00:09:56.280 2.276 - 2.287: 95.3360% ( 15) 00:09:56.280 2.287 - 2.299: 95.5570% ( 30) 00:09:56.280 2.299 - 2.311: 95.8665% ( 42) 00:09:56.280 2.311 - 2.323: 95.9844% ( 16) 00:09:56.280 2.323 - 2.335: 96.0139% ( 4) 00:09:56.280 2.335 - 2.347: 96.0949% ( 11) 00:09:56.280 2.347 - 2.359: 96.2938% ( 27) 00:09:56.280 2.359 - 2.370: 96.6770% ( 52) 00:09:56.280 2.370 - 2.382: 97.0896% ( 56) 00:09:56.280 2.382 - 2.394: 97.5022% ( 56) 00:09:56.280 2.394 - 2.406: 97.8264% ( 44) 00:09:56.280 2.406 - 2.418: 97.9590% ( 18) 00:09:56.280 2.418 - 2.430: 98.0843% ( 17) 00:09:56.280 2.430 - 2.441: 98.2243% ( 19) 00:09:56.280 2.441 - 2.453: 98.3127% ( 12) 00:09:56.280 2.453 - 2.465: 98.3790% ( 9) 00:09:56.280 2.465 - 2.477: 98.4232% ( 6) 00:09:56.280 2.477 - 2.489: 98.4453% ( 3) 00:09:56.280 2.489 - 2.501: 98.4527% ( 1) 00:09:56.280 2.501 - 2.513: 98.4601% ( 1) 00:09:56.280 2.524 - 2.536: 98.4895% ( 4) 00:09:56.280 2.548 - 2.560: 98.4969% ( 1) 00:09:56.280 2.584 - 2.596: 98.5190% ( 3) 00:09:56.280 2.619 - 2.631: 98.5337% ( 2) 00:09:56.280 2.631 - 2.643: 98.5411% ( 1) 00:09:56.280 2.655 - 2.667: 98.5485% ( 1) 00:09:56.280 2.714 - 2.726: 98.5559% ( 1) 00:09:56.280 2.726 - 2.738: 98.5632% ( 1) 00:09:56.280 2.761 - 2.773: 98.5706% ( 1) 00:09:56.280 2.821 - 2.833: 98.5780% ( 1) 00:09:56.280 3.022 - 3.034: 98.5853% ( 1) 00:09:56.280 3.295 - 3.319: 98.5927% ( 1) 00:09:56.280 3.319 - 3.342: 98.6074% ( 2) 00:09:56.280 3.437 - 3.461: 98.6148% ( 1) 00:09:56.280 3.461 - 3.484: 98.6369% ( 3) 00:09:56.281 3.484 - 3.508: 98.6443% ( 1) 00:09:56.281 3.532 - 3.556: 98.6516% ( 1) 00:09:56.281 3.556 - 3.579: 98.6664% ( 2) 00:09:56.281 3.579 - 3.603: 98.6811% ( 2) 00:09:56.281 3.603 - 3.627: 98.6958% ( 2) 00:09:56.281 3.627 - 3.650: 98.7106% ( 2) 00:09:56.281 3.650 - 3.674: 98.7253% ( 2) 00:09:56.281 3.674 - 3.698: 98.7327% ( 1) 
00:09:56.281 [2024-07-15 22:33:39.374458] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:56.281 3.745 - 3.769: 98.7401% ( 1) 00:09:56.281 3.769 - 3.793: 98.7474% ( 1) 00:09:56.281 3.816 - 3.840: 98.7548% ( 1) 00:09:56.281 3.864 - 3.887: 98.7622% ( 1) 00:09:56.281 3.887 - 3.911: 98.7769% ( 2) 00:09:56.281 3.935 - 3.959: 98.7916% ( 2) 00:09:56.281 4.006 - 4.030: 98.7990% ( 1) 00:09:56.281 4.077 - 4.101: 98.8064% ( 1) 00:09:56.281 4.219 - 4.243: 98.8137% ( 1) 00:09:56.281 5.049 - 5.073: 98.8211% ( 1) 00:09:56.281 5.144 - 5.167: 98.8285% ( 1) 00:09:56.281 5.736 - 5.760: 98.8358% ( 1) 00:09:56.281 5.831 - 5.855: 98.8432% ( 1) 00:09:56.281 5.902 - 5.926: 98.8506% ( 1) 00:09:56.281 5.973 - 5.997: 98.8579% ( 1) 00:09:56.281 6.068 - 6.116: 98.8727% ( 2) 00:09:56.281 6.210 - 6.258: 98.8874% ( 2) 00:09:56.281 6.305 - 6.353: 98.8948% ( 1) 00:09:56.281 6.400 - 6.447: 98.9022% ( 1) 00:09:56.281 6.495 - 6.542: 98.9095% ( 1) 00:09:56.281 6.590 - 6.637: 98.9243% ( 2) 00:09:56.281 6.732 - 6.779: 98.9316% ( 1) 00:09:56.281 6.874 - 6.921: 98.9390% ( 1) 00:09:56.281 7.016 - 7.064: 98.9464% ( 1) 00:09:56.281 7.159 - 7.206: 98.9537% ( 1) 00:09:56.281 7.680 - 7.727: 98.9611% ( 1) 00:09:56.281 8.059 - 8.107: 98.9685% ( 1) 00:09:56.281 9.766 - 9.813: 98.9758% ( 1) 00:09:56.281 15.360 - 15.455: 98.9832% ( 1) 00:09:56.281 15.455 - 15.550: 98.9979% ( 2) 00:09:56.281 15.644 - 15.739: 99.0053% ( 1) 00:09:56.281 15.739 - 15.834: 99.0274% ( 3) 00:09:56.281 15.834 - 15.929: 99.0421% ( 2) 00:09:56.281 15.929 - 16.024: 99.0569% ( 2) 00:09:56.281 16.024 - 16.119: 99.0790% ( 3) 00:09:56.281 16.119 - 16.213: 99.0864% ( 1) 00:09:56.281 16.213 - 16.308: 99.0937% ( 1) 00:09:56.281 16.308 - 16.403: 99.1379% ( 6) 00:09:56.281 16.403 - 16.498: 99.1674% ( 4) 00:09:56.281 16.498 - 16.593: 99.2116% ( 6) 00:09:56.281 16.593 - 16.687: 99.2411% ( 4) 00:09:56.281 16.687 - 16.782: 99.2853% ( 6) 00:09:56.281 16.782 - 16.877: 99.3000% ( 2) 00:09:56.281 
16.877 - 16.972: 99.3221% ( 3) 00:09:56.281 16.972 - 17.067: 99.3295% ( 1) 00:09:56.281 17.067 - 17.161: 99.3590% ( 4) 00:09:56.281 17.161 - 17.256: 99.3663% ( 1) 00:09:56.281 17.256 - 17.351: 99.3811% ( 2) 00:09:56.281 17.351 - 17.446: 99.3958% ( 2) 00:09:56.281 17.541 - 17.636: 99.4032% ( 1) 00:09:56.281 17.636 - 17.730: 99.4106% ( 1) 00:09:56.281 17.825 - 17.920: 99.4179% ( 1) 00:09:56.281 18.015 - 18.110: 99.4253% ( 1) 00:09:56.281 18.110 - 18.204: 99.4400% ( 2) 00:09:56.281 18.299 - 18.394: 99.4474% ( 1) 00:09:56.281 18.489 - 18.584: 99.4548% ( 1) 00:09:56.281 18.773 - 18.868: 99.4695% ( 2) 00:09:56.281 19.058 - 19.153: 99.4769% ( 1) 00:09:56.281 24.652 - 24.841: 99.4842% ( 1) 00:09:56.281 3980.705 - 4004.978: 99.8821% ( 54) 00:09:56.281 4004.978 - 4029.250: 99.9926% ( 15) 00:09:56.281 4975.881 - 5000.154: 100.0000% ( 1) 00:09:56.281 00:09:56.281 22:33:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:09:56.281 22:33:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:09:56.281 22:33:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:09:56.281 22:33:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:09:56.281 22:33:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:09:56.281 [ 00:09:56.281 { 00:09:56.281 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:56.281 "subtype": "Discovery", 00:09:56.281 "listen_addresses": [], 00:09:56.281 "allow_any_host": true, 00:09:56.281 "hosts": [] 00:09:56.281 }, 00:09:56.281 { 00:09:56.281 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:09:56.281 "subtype": "NVMe", 00:09:56.281 "listen_addresses": [ 00:09:56.281 { 00:09:56.281 "trtype": "VFIOUSER", 00:09:56.281 "adrfam": "IPv4", 
00:09:56.281 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:09:56.281 "trsvcid": "0" 00:09:56.281 } 00:09:56.281 ], 00:09:56.281 "allow_any_host": true, 00:09:56.281 "hosts": [], 00:09:56.281 "serial_number": "SPDK1", 00:09:56.281 "model_number": "SPDK bdev Controller", 00:09:56.281 "max_namespaces": 32, 00:09:56.281 "min_cntlid": 1, 00:09:56.281 "max_cntlid": 65519, 00:09:56.281 "namespaces": [ 00:09:56.281 { 00:09:56.281 "nsid": 1, 00:09:56.281 "bdev_name": "Malloc1", 00:09:56.281 "name": "Malloc1", 00:09:56.281 "nguid": "334D449804354C93A68C352D94E23FB9", 00:09:56.281 "uuid": "334d4498-0435-4c93-a68c-352d94e23fb9" 00:09:56.281 } 00:09:56.281 ] 00:09:56.281 }, 00:09:56.281 { 00:09:56.281 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:09:56.281 "subtype": "NVMe", 00:09:56.281 "listen_addresses": [ 00:09:56.281 { 00:09:56.281 "trtype": "VFIOUSER", 00:09:56.281 "adrfam": "IPv4", 00:09:56.281 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:09:56.281 "trsvcid": "0" 00:09:56.281 } 00:09:56.281 ], 00:09:56.281 "allow_any_host": true, 00:09:56.281 "hosts": [], 00:09:56.281 "serial_number": "SPDK2", 00:09:56.281 "model_number": "SPDK bdev Controller", 00:09:56.281 "max_namespaces": 32, 00:09:56.281 "min_cntlid": 1, 00:09:56.281 "max_cntlid": 65519, 00:09:56.281 "namespaces": [ 00:09:56.281 { 00:09:56.281 "nsid": 1, 00:09:56.281 "bdev_name": "Malloc2", 00:09:56.281 "name": "Malloc2", 00:09:56.281 "nguid": "6E308E16E7D64DE2B9C6312BE220D4B4", 00:09:56.281 "uuid": "6e308e16-e7d6-4de2-b9c6-312be220d4b4" 00:09:56.281 } 00:09:56.281 ] 00:09:56.281 } 00:09:56.281 ] 00:09:56.281 22:33:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:09:56.281 22:33:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1203606 00:09:56.281 22:33:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER 
traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:09:56.281 22:33:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:09:56.281 22:33:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1259 -- # local i=0 00:09:56.281 22:33:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1260 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:09:56.281 22:33:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:09:56.281 22:33:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # return 0 00:09:56.281 22:33:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:09:56.281 22:33:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:09:56.539 [2024-07-15 22:33:39.857365] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:56.539 Malloc3 00:09:56.539 22:33:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:09:56.797 [2024-07-15 22:33:40.205808] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:56.797 22:33:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:09:56.797 Asynchronous Event Request test 00:09:56.797 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:56.797 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:56.797 Registering asynchronous event callbacks... 00:09:56.797 Starting namespace attribute notice tests for all controllers... 
00:09:56.797 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:09:56.797 aer_cb - Changed Namespace 00:09:56.797 Cleaning up... 00:09:57.057 [ 00:09:57.057 { 00:09:57.057 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:57.057 "subtype": "Discovery", 00:09:57.057 "listen_addresses": [], 00:09:57.057 "allow_any_host": true, 00:09:57.057 "hosts": [] 00:09:57.057 }, 00:09:57.057 { 00:09:57.057 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:09:57.057 "subtype": "NVMe", 00:09:57.057 "listen_addresses": [ 00:09:57.057 { 00:09:57.057 "trtype": "VFIOUSER", 00:09:57.057 "adrfam": "IPv4", 00:09:57.057 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:09:57.057 "trsvcid": "0" 00:09:57.057 } 00:09:57.057 ], 00:09:57.057 "allow_any_host": true, 00:09:57.057 "hosts": [], 00:09:57.057 "serial_number": "SPDK1", 00:09:57.057 "model_number": "SPDK bdev Controller", 00:09:57.057 "max_namespaces": 32, 00:09:57.057 "min_cntlid": 1, 00:09:57.057 "max_cntlid": 65519, 00:09:57.057 "namespaces": [ 00:09:57.057 { 00:09:57.057 "nsid": 1, 00:09:57.057 "bdev_name": "Malloc1", 00:09:57.057 "name": "Malloc1", 00:09:57.057 "nguid": "334D449804354C93A68C352D94E23FB9", 00:09:57.057 "uuid": "334d4498-0435-4c93-a68c-352d94e23fb9" 00:09:57.057 }, 00:09:57.057 { 00:09:57.057 "nsid": 2, 00:09:57.057 "bdev_name": "Malloc3", 00:09:57.057 "name": "Malloc3", 00:09:57.057 "nguid": "FDB84C2675A24B09B94C2BC6976F55DB", 00:09:57.057 "uuid": "fdb84c26-75a2-4b09-b94c-2bc6976f55db" 00:09:57.057 } 00:09:57.057 ] 00:09:57.057 }, 00:09:57.057 { 00:09:57.057 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:09:57.057 "subtype": "NVMe", 00:09:57.057 "listen_addresses": [ 00:09:57.057 { 00:09:57.057 "trtype": "VFIOUSER", 00:09:57.057 "adrfam": "IPv4", 00:09:57.057 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:09:57.057 "trsvcid": "0" 00:09:57.057 } 00:09:57.057 ], 00:09:57.057 "allow_any_host": true, 00:09:57.057 "hosts": [], 00:09:57.057 "serial_number": 
"SPDK2", 00:09:57.057 "model_number": "SPDK bdev Controller", 00:09:57.057 "max_namespaces": 32, 00:09:57.057 "min_cntlid": 1, 00:09:57.057 "max_cntlid": 65519, 00:09:57.057 "namespaces": [ 00:09:57.057 { 00:09:57.057 "nsid": 1, 00:09:57.057 "bdev_name": "Malloc2", 00:09:57.057 "name": "Malloc2", 00:09:57.057 "nguid": "6E308E16E7D64DE2B9C6312BE220D4B4", 00:09:57.057 "uuid": "6e308e16-e7d6-4de2-b9c6-312be220d4b4" 00:09:57.057 } 00:09:57.057 ] 00:09:57.057 } 00:09:57.058 ] 00:09:57.058 22:33:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1203606 00:09:57.058 22:33:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:57.058 22:33:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:09:57.058 22:33:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:09:57.058 22:33:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:57.058 [2024-07-15 22:33:40.494399] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:09:57.058 [2024-07-15 22:33:40.494441] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1203720 ] 00:09:57.058 [2024-07-15 22:33:40.528055] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:09:57.058 [2024-07-15 22:33:40.540011] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:57.058 [2024-07-15 22:33:40.540041] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f4b2edf2000 00:09:57.058 [2024-07-15 22:33:40.541009] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:57.058 [2024-07-15 22:33:40.542021] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:57.058 [2024-07-15 22:33:40.543025] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:57.058 [2024-07-15 22:33:40.544035] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:57.058 [2024-07-15 22:33:40.545040] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:57.058 [2024-07-15 22:33:40.546049] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:57.058 [2024-07-15 22:33:40.547051] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:57.058 
[2024-07-15 22:33:40.548056] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:57.058 [2024-07-15 22:33:40.549072] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:57.058 [2024-07-15 22:33:40.549094] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f4b2ede7000 00:09:57.058 [2024-07-15 22:33:40.550263] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:57.319 [2024-07-15 22:33:40.567229] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:09:57.319 [2024-07-15 22:33:40.567265] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:09:57.319 [2024-07-15 22:33:40.569355] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:09:57.319 [2024-07-15 22:33:40.569410] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:57.319 [2024-07-15 22:33:40.569503] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:09:57.319 [2024-07-15 22:33:40.569530] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:09:57.319 [2024-07-15 22:33:40.569540] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:09:57.319 [2024-07-15 22:33:40.570360] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:09:57.319 [2024-07-15 22:33:40.570381] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:09:57.319 [2024-07-15 22:33:40.570394] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:09:57.319 [2024-07-15 22:33:40.571367] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:09:57.319 [2024-07-15 22:33:40.571387] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:09:57.319 [2024-07-15 22:33:40.571401] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:09:57.319 [2024-07-15 22:33:40.572375] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:09:57.319 [2024-07-15 22:33:40.572396] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:57.319 [2024-07-15 22:33:40.573377] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:09:57.319 [2024-07-15 22:33:40.573398] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:09:57.319 [2024-07-15 22:33:40.573408] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:09:57.319 [2024-07-15 22:33:40.573419] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:57.319 [2024-07-15 22:33:40.573528] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:09:57.319 [2024-07-15 22:33:40.573536] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:57.319 [2024-07-15 22:33:40.573545] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:09:57.319 [2024-07-15 22:33:40.574385] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:09:57.319 [2024-07-15 22:33:40.575386] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:09:57.319 [2024-07-15 22:33:40.576395] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:09:57.319 [2024-07-15 22:33:40.577385] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:09:57.319 [2024-07-15 22:33:40.577465] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:57.319 [2024-07-15 22:33:40.578404] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:09:57.319 [2024-07-15 22:33:40.578425] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:57.319 [2024-07-15 22:33:40.578439] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:09:57.319 [2024-07-15 22:33:40.578463] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:09:57.319 [2024-07-15 22:33:40.578476] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:09:57.319 [2024-07-15 22:33:40.578501] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:57.319 [2024-07-15 22:33:40.578511] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:57.319 [2024-07-15 22:33:40.578532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:57.319 [2024-07-15 22:33:40.584896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:57.319 [2024-07-15 22:33:40.584922] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:09:57.319 [2024-07-15 22:33:40.584936] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:09:57.319 [2024-07-15 22:33:40.584945] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:09:57.319 [2024-07-15 22:33:40.584953] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:57.319 [2024-07-15 22:33:40.584961] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:09:57.319 [2024-07-15 22:33:40.584969] 
nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:09:57.319 [2024-07-15 22:33:40.584977] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:09:57.319 [2024-07-15 22:33:40.584991] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:09:57.319 [2024-07-15 22:33:40.585008] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:09:57.319 [2024-07-15 22:33:40.592890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:57.319 [2024-07-15 22:33:40.592920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:57.319 [2024-07-15 22:33:40.592935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:57.319 [2024-07-15 22:33:40.592948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:57.319 [2024-07-15 22:33:40.592960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:57.319 [2024-07-15 22:33:40.592969] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:09:57.319 [2024-07-15 22:33:40.592985] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:57.319 [2024-07-15 22:33:40.593001] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:57.319 [2024-07-15 22:33:40.600888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:57.319 [2024-07-15 22:33:40.600912] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:09:57.319 [2024-07-15 22:33:40.600923] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:57.319 [2024-07-15 22:33:40.600935] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:09:57.319 [2024-07-15 22:33:40.600947] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:09:57.319 [2024-07-15 22:33:40.600961] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:57.319 [2024-07-15 22:33:40.608886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:57.319 [2024-07-15 22:33:40.608960] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:09:57.319 [2024-07-15 22:33:40.608997] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:09:57.319 [2024-07-15 22:33:40.609012] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:57.319 [2024-07-15 22:33:40.609020] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:57.319 [2024-07-15 22:33:40.609031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:09:57.319 [2024-07-15 22:33:40.616889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:57.319 [2024-07-15 22:33:40.616914] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:09:57.319 [2024-07-15 22:33:40.616935] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:09:57.319 [2024-07-15 22:33:40.616951] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:09:57.319 [2024-07-15 22:33:40.616964] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:57.319 [2024-07-15 22:33:40.616973] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:57.319 [2024-07-15 22:33:40.616983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:57.319 [2024-07-15 22:33:40.624901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:57.319 [2024-07-15 22:33:40.624933] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:57.319 [2024-07-15 22:33:40.624950] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 
30000 ms) 00:09:57.319 [2024-07-15 22:33:40.624963] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:57.319 [2024-07-15 22:33:40.624972] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:57.319 [2024-07-15 22:33:40.624991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:57.319 [2024-07-15 22:33:40.632887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:57.319 [2024-07-15 22:33:40.632909] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:57.319 [2024-07-15 22:33:40.632927] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:09:57.319 [2024-07-15 22:33:40.632942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:09:57.319 [2024-07-15 22:33:40.632953] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:09:57.319 [2024-07-15 22:33:40.632961] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:09:57.320 [2024-07-15 22:33:40.632970] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:09:57.320 [2024-07-15 22:33:40.632979] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:09:57.320 
[2024-07-15 22:33:40.632987] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:09:57.320 [2024-07-15 22:33:40.632995] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:09:57.320 [2024-07-15 22:33:40.633022] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:57.320 [2024-07-15 22:33:40.640885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:57.320 [2024-07-15 22:33:40.640912] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:57.320 [2024-07-15 22:33:40.648888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:57.320 [2024-07-15 22:33:40.648913] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:57.320 [2024-07-15 22:33:40.656885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:57.320 [2024-07-15 22:33:40.656912] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:57.320 [2024-07-15 22:33:40.664888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:57.320 [2024-07-15 22:33:40.664922] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:57.320 [2024-07-15 22:33:40.664934] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:57.320 [2024-07-15 22:33:40.664940] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:09:57.320 [2024-07-15 22:33:40.664946] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:09:57.320 [2024-07-15 22:33:40.664955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:57.320 [2024-07-15 22:33:40.664967] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:57.320 [2024-07-15 22:33:40.664975] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:57.320 [2024-07-15 22:33:40.664984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:57.320 [2024-07-15 22:33:40.664995] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:57.320 [2024-07-15 22:33:40.665003] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:57.320 [2024-07-15 22:33:40.665015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:57.320 [2024-07-15 22:33:40.665029] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:57.320 [2024-07-15 22:33:40.665037] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:57.320 [2024-07-15 22:33:40.665045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:09:57.320 [2024-07-15 22:33:40.672903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:09:57.320 [2024-07-15 22:33:40.672931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:57.320 [2024-07-15 22:33:40.672949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:57.320 [2024-07-15 22:33:40.672961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:57.320 ===================================================== 00:09:57.320 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:09:57.320 ===================================================== 00:09:57.320 Controller Capabilities/Features 00:09:57.320 ================================ 00:09:57.320 Vendor ID: 4e58 00:09:57.320 Subsystem Vendor ID: 4e58 00:09:57.320 Serial Number: SPDK2 00:09:57.320 Model Number: SPDK bdev Controller 00:09:57.320 Firmware Version: 24.09 00:09:57.320 Recommended Arb Burst: 6 00:09:57.320 IEEE OUI Identifier: 8d 6b 50 00:09:57.320 Multi-path I/O 00:09:57.320 May have multiple subsystem ports: Yes 00:09:57.320 May have multiple controllers: Yes 00:09:57.320 Associated with SR-IOV VF: No 00:09:57.320 Max Data Transfer Size: 131072 00:09:57.320 Max Number of Namespaces: 32 00:09:57.320 Max Number of I/O Queues: 127 00:09:57.320 NVMe Specification Version (VS): 1.3 00:09:57.320 NVMe Specification Version (Identify): 1.3 00:09:57.320 Maximum Queue Entries: 256 00:09:57.320 Contiguous Queues Required: Yes 00:09:57.320 Arbitration Mechanisms Supported 00:09:57.320 Weighted Round Robin: Not Supported 00:09:57.320 Vendor Specific: Not Supported 00:09:57.320 Reset Timeout: 15000 ms 00:09:57.320 Doorbell Stride: 4 bytes 00:09:57.320 NVM Subsystem Reset: Not Supported 00:09:57.320 Command Sets Supported 00:09:57.320 NVM Command Set: Supported 00:09:57.320 Boot Partition: Not Supported 00:09:57.320 Memory 
Page Size Minimum: 4096 bytes 00:09:57.320 Memory Page Size Maximum: 4096 bytes 00:09:57.320 Persistent Memory Region: Not Supported 00:09:57.320 Optional Asynchronous Events Supported 00:09:57.320 Namespace Attribute Notices: Supported 00:09:57.320 Firmware Activation Notices: Not Supported 00:09:57.320 ANA Change Notices: Not Supported 00:09:57.320 PLE Aggregate Log Change Notices: Not Supported 00:09:57.320 LBA Status Info Alert Notices: Not Supported 00:09:57.320 EGE Aggregate Log Change Notices: Not Supported 00:09:57.320 Normal NVM Subsystem Shutdown event: Not Supported 00:09:57.320 Zone Descriptor Change Notices: Not Supported 00:09:57.320 Discovery Log Change Notices: Not Supported 00:09:57.320 Controller Attributes 00:09:57.320 128-bit Host Identifier: Supported 00:09:57.320 Non-Operational Permissive Mode: Not Supported 00:09:57.320 NVM Sets: Not Supported 00:09:57.320 Read Recovery Levels: Not Supported 00:09:57.320 Endurance Groups: Not Supported 00:09:57.320 Predictable Latency Mode: Not Supported 00:09:57.320 Traffic Based Keep ALive: Not Supported 00:09:57.320 Namespace Granularity: Not Supported 00:09:57.320 SQ Associations: Not Supported 00:09:57.320 UUID List: Not Supported 00:09:57.320 Multi-Domain Subsystem: Not Supported 00:09:57.320 Fixed Capacity Management: Not Supported 00:09:57.320 Variable Capacity Management: Not Supported 00:09:57.320 Delete Endurance Group: Not Supported 00:09:57.320 Delete NVM Set: Not Supported 00:09:57.320 Extended LBA Formats Supported: Not Supported 00:09:57.320 Flexible Data Placement Supported: Not Supported 00:09:57.320 00:09:57.320 Controller Memory Buffer Support 00:09:57.320 ================================ 00:09:57.320 Supported: No 00:09:57.320 00:09:57.320 Persistent Memory Region Support 00:09:57.320 ================================ 00:09:57.320 Supported: No 00:09:57.320 00:09:57.320 Admin Command Set Attributes 00:09:57.320 ============================ 00:09:57.320 Security Send/Receive: Not Supported 
00:09:57.320 Format NVM: Not Supported 00:09:57.320 Firmware Activate/Download: Not Supported 00:09:57.320 Namespace Management: Not Supported 00:09:57.320 Device Self-Test: Not Supported 00:09:57.320 Directives: Not Supported 00:09:57.320 NVMe-MI: Not Supported 00:09:57.320 Virtualization Management: Not Supported 00:09:57.320 Doorbell Buffer Config: Not Supported 00:09:57.320 Get LBA Status Capability: Not Supported 00:09:57.320 Command & Feature Lockdown Capability: Not Supported 00:09:57.320 Abort Command Limit: 4 00:09:57.320 Async Event Request Limit: 4 00:09:57.320 Number of Firmware Slots: N/A 00:09:57.320 Firmware Slot 1 Read-Only: N/A 00:09:57.320 Firmware Activation Without Reset: N/A 00:09:57.320 Multiple Update Detection Support: N/A 00:09:57.320 Firmware Update Granularity: No Information Provided 00:09:57.320 Per-Namespace SMART Log: No 00:09:57.320 Asymmetric Namespace Access Log Page: Not Supported 00:09:57.320 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:09:57.320 Command Effects Log Page: Supported 00:09:57.320 Get Log Page Extended Data: Supported 00:09:57.320 Telemetry Log Pages: Not Supported 00:09:57.320 Persistent Event Log Pages: Not Supported 00:09:57.320 Supported Log Pages Log Page: May Support 00:09:57.320 Commands Supported & Effects Log Page: Not Supported 00:09:57.320 Feature Identifiers & Effects Log Page:May Support 00:09:57.320 NVMe-MI Commands & Effects Log Page: May Support 00:09:57.320 Data Area 4 for Telemetry Log: Not Supported 00:09:57.320 Error Log Page Entries Supported: 128 00:09:57.320 Keep Alive: Supported 00:09:57.320 Keep Alive Granularity: 10000 ms 00:09:57.320 00:09:57.320 NVM Command Set Attributes 00:09:57.320 ========================== 00:09:57.320 Submission Queue Entry Size 00:09:57.320 Max: 64 00:09:57.320 Min: 64 00:09:57.320 Completion Queue Entry Size 00:09:57.320 Max: 16 00:09:57.320 Min: 16 00:09:57.320 Number of Namespaces: 32 00:09:57.320 Compare Command: Supported 00:09:57.320 Write Uncorrectable 
Command: Not Supported 00:09:57.320 Dataset Management Command: Supported 00:09:57.320 Write Zeroes Command: Supported 00:09:57.320 Set Features Save Field: Not Supported 00:09:57.320 Reservations: Not Supported 00:09:57.320 Timestamp: Not Supported 00:09:57.320 Copy: Supported 00:09:57.320 Volatile Write Cache: Present 00:09:57.320 Atomic Write Unit (Normal): 1 00:09:57.320 Atomic Write Unit (PFail): 1 00:09:57.320 Atomic Compare & Write Unit: 1 00:09:57.320 Fused Compare & Write: Supported 00:09:57.320 Scatter-Gather List 00:09:57.321 SGL Command Set: Supported (Dword aligned) 00:09:57.321 SGL Keyed: Not Supported 00:09:57.321 SGL Bit Bucket Descriptor: Not Supported 00:09:57.321 SGL Metadata Pointer: Not Supported 00:09:57.321 Oversized SGL: Not Supported 00:09:57.321 SGL Metadata Address: Not Supported 00:09:57.321 SGL Offset: Not Supported 00:09:57.321 Transport SGL Data Block: Not Supported 00:09:57.321 Replay Protected Memory Block: Not Supported 00:09:57.321 00:09:57.321 Firmware Slot Information 00:09:57.321 ========================= 00:09:57.321 Active slot: 1 00:09:57.321 Slot 1 Firmware Revision: 24.09 00:09:57.321 00:09:57.321 00:09:57.321 Commands Supported and Effects 00:09:57.321 ============================== 00:09:57.321 Admin Commands 00:09:57.321 -------------- 00:09:57.321 Get Log Page (02h): Supported 00:09:57.321 Identify (06h): Supported 00:09:57.321 Abort (08h): Supported 00:09:57.321 Set Features (09h): Supported 00:09:57.321 Get Features (0Ah): Supported 00:09:57.321 Asynchronous Event Request (0Ch): Supported 00:09:57.321 Keep Alive (18h): Supported 00:09:57.321 I/O Commands 00:09:57.321 ------------ 00:09:57.321 Flush (00h): Supported LBA-Change 00:09:57.321 Write (01h): Supported LBA-Change 00:09:57.321 Read (02h): Supported 00:09:57.321 Compare (05h): Supported 00:09:57.321 Write Zeroes (08h): Supported LBA-Change 00:09:57.321 Dataset Management (09h): Supported LBA-Change 00:09:57.321 Copy (19h): Supported LBA-Change 00:09:57.321 
00:09:57.321 Error Log 00:09:57.321 ========= 00:09:57.321 00:09:57.321 Arbitration 00:09:57.321 =========== 00:09:57.321 Arbitration Burst: 1 00:09:57.321 00:09:57.321 Power Management 00:09:57.321 ================ 00:09:57.321 Number of Power States: 1 00:09:57.321 Current Power State: Power State #0 00:09:57.321 Power State #0: 00:09:57.321 Max Power: 0.00 W 00:09:57.321 Non-Operational State: Operational 00:09:57.321 Entry Latency: Not Reported 00:09:57.321 Exit Latency: Not Reported 00:09:57.321 Relative Read Throughput: 0 00:09:57.321 Relative Read Latency: 0 00:09:57.321 Relative Write Throughput: 0 00:09:57.321 Relative Write Latency: 0 00:09:57.321 Idle Power: Not Reported 00:09:57.321 Active Power: Not Reported 00:09:57.321 Non-Operational Permissive Mode: Not Supported 00:09:57.321 00:09:57.321 Health Information 00:09:57.321 ================== 00:09:57.321 Critical Warnings: 00:09:57.321 Available Spare Space: OK 00:09:57.321 Temperature: OK 00:09:57.321 Device Reliability: OK 00:09:57.321 Read Only: No 00:09:57.321 Volatile Memory Backup: OK 00:09:57.321 Current Temperature: 0 Kelvin (-273 Celsius) 00:09:57.321 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:57.321 Available Spare: 0% 00:09:57.321 Available Sp[2024-07-15 22:33:40.673074] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:57.321 [2024-07-15 22:33:40.680902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:09:57.321 [2024-07-15 22:33:40.680952] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:09:57.321 [2024-07-15 22:33:40.680970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:57.321 [2024-07-15 22:33:40.680981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:57.321 [2024-07-15 22:33:40.680991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:57.321 [2024-07-15 22:33:40.681001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:57.321 [2024-07-15 22:33:40.681090] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:09:57.321 [2024-07-15 22:33:40.681112] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:09:57.321 [2024-07-15 22:33:40.682091] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:09:57.321 [2024-07-15 22:33:40.682179] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:09:57.321 [2024-07-15 22:33:40.682209] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:09:57.321 [2024-07-15 22:33:40.683098] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:09:57.321 [2024-07-15 22:33:40.683123] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:09:57.321 [2024-07-15 22:33:40.683190] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:09:57.321 [2024-07-15 22:33:40.685890] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:57.321 are Threshold: 0% 00:09:57.321 Life Percentage Used: 0% 00:09:57.321 Data 
Units Read: 0 00:09:57.321 Data Units Written: 0 00:09:57.321 Host Read Commands: 0 00:09:57.321 Host Write Commands: 0 00:09:57.321 Controller Busy Time: 0 minutes 00:09:57.321 Power Cycles: 0 00:09:57.321 Power On Hours: 0 hours 00:09:57.321 Unsafe Shutdowns: 0 00:09:57.321 Unrecoverable Media Errors: 0 00:09:57.321 Lifetime Error Log Entries: 0 00:09:57.321 Warning Temperature Time: 0 minutes 00:09:57.321 Critical Temperature Time: 0 minutes 00:09:57.321 00:09:57.321 Number of Queues 00:09:57.321 ================ 00:09:57.321 Number of I/O Submission Queues: 127 00:09:57.321 Number of I/O Completion Queues: 127 00:09:57.321 00:09:57.321 Active Namespaces 00:09:57.321 ================= 00:09:57.321 Namespace ID:1 00:09:57.321 Error Recovery Timeout: Unlimited 00:09:57.321 Command Set Identifier: NVM (00h) 00:09:57.321 Deallocate: Supported 00:09:57.321 Deallocated/Unwritten Error: Not Supported 00:09:57.321 Deallocated Read Value: Unknown 00:09:57.321 Deallocate in Write Zeroes: Not Supported 00:09:57.321 Deallocated Guard Field: 0xFFFF 00:09:57.321 Flush: Supported 00:09:57.321 Reservation: Supported 00:09:57.321 Namespace Sharing Capabilities: Multiple Controllers 00:09:57.321 Size (in LBAs): 131072 (0GiB) 00:09:57.321 Capacity (in LBAs): 131072 (0GiB) 00:09:57.321 Utilization (in LBAs): 131072 (0GiB) 00:09:57.321 NGUID: 6E308E16E7D64DE2B9C6312BE220D4B4 00:09:57.321 UUID: 6e308e16-e7d6-4de2-b9c6-312be220d4b4 00:09:57.321 Thin Provisioning: Not Supported 00:09:57.321 Per-NS Atomic Units: Yes 00:09:57.321 Atomic Boundary Size (Normal): 0 00:09:57.321 Atomic Boundary Size (PFail): 0 00:09:57.321 Atomic Boundary Offset: 0 00:09:57.321 Maximum Single Source Range Length: 65535 00:09:57.321 Maximum Copy Length: 65535 00:09:57.321 Maximum Source Range Count: 1 00:09:57.321 NGUID/EUI64 Never Reused: No 00:09:57.321 Namespace Write Protected: No 00:09:57.321 Number of LBA Formats: 1 00:09:57.321 Current LBA Format: LBA Format #00 00:09:57.321 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:09:57.321 00:09:57.321 22:33:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:57.580 [2024-07-15 22:33:40.913583] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:02.853 Initializing NVMe Controllers 00:10:02.853 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:02.853 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:02.853 Initialization complete. Launching workers. 00:10:02.853 ======================================================== 00:10:02.853 Latency(us) 00:10:02.853 Device Information : IOPS MiB/s Average min max 00:10:02.853 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34968.10 136.59 3659.87 1175.98 9003.08 00:10:02.853 ======================================================== 00:10:02.853 Total : 34968.10 136.59 3659.87 1175.98 9003.08 00:10:02.853 00:10:02.853 [2024-07-15 22:33:46.016255] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:02.853 22:33:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:02.853 [2024-07-15 22:33:46.262895] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:08.129 Initializing NVMe Controllers 00:10:08.129 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:08.129 
Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:08.129 Initialization complete. Launching workers. 00:10:08.129 ======================================================== 00:10:08.129 Latency(us) 00:10:08.129 Device Information : IOPS MiB/s Average min max 00:10:08.129 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32402.52 126.57 3949.63 1224.41 9654.48 00:10:08.129 ======================================================== 00:10:08.129 Total : 32402.52 126.57 3949.63 1224.41 9654.48 00:10:08.129 00:10:08.129 [2024-07-15 22:33:51.282788] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:08.129 22:33:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:08.129 [2024-07-15 22:33:51.503637] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:13.404 [2024-07-15 22:33:56.633012] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:13.404 Initializing NVMe Controllers 00:10:13.404 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:13.404 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:13.404 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:10:13.404 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:10:13.404 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:10:13.404 Initialization complete. Launching workers. 
00:10:13.404 Starting thread on core 2 00:10:13.404 Starting thread on core 3 00:10:13.404 Starting thread on core 1 00:10:13.404 22:33:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:10:13.663 [2024-07-15 22:33:56.942338] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:17.004 [2024-07-15 22:34:00.005033] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:17.004 Initializing NVMe Controllers 00:10:17.004 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:17.004 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:17.004 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:10:17.004 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:10:17.004 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:10:17.004 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:10:17.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:17.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:17.004 Initialization complete. Launching workers. 
00:10:17.004 Starting thread on core 1 with urgent priority queue 00:10:17.004 Starting thread on core 2 with urgent priority queue 00:10:17.004 Starting thread on core 3 with urgent priority queue 00:10:17.004 Starting thread on core 0 with urgent priority queue 00:10:17.004 SPDK bdev Controller (SPDK2 ) core 0: 5106.33 IO/s 19.58 secs/100000 ios 00:10:17.004 SPDK bdev Controller (SPDK2 ) core 1: 5353.00 IO/s 18.68 secs/100000 ios 00:10:17.004 SPDK bdev Controller (SPDK2 ) core 2: 5587.33 IO/s 17.90 secs/100000 ios 00:10:17.004 SPDK bdev Controller (SPDK2 ) core 3: 5722.67 IO/s 17.47 secs/100000 ios 00:10:17.004 ======================================================== 00:10:17.004 00:10:17.004 22:34:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:17.004 [2024-07-15 22:34:00.314562] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:17.004 Initializing NVMe Controllers 00:10:17.004 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:17.004 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:17.004 Namespace ID: 1 size: 0GB 00:10:17.004 Initialization complete. 00:10:17.004 INFO: using host memory buffer for IO 00:10:17.004 Hello world! 
00:10:17.004 [2024-07-15 22:34:00.323612] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:17.004 22:34:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:17.263 [2024-07-15 22:34:00.610209] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:18.641 Initializing NVMe Controllers 00:10:18.641 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:18.641 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:18.641 Initialization complete. Launching workers. 00:10:18.641 submit (in ns) avg, min, max = 8646.9, 3536.7, 4016940.0 00:10:18.641 complete (in ns) avg, min, max = 26826.6, 2065.6, 5010477.8 00:10:18.641 00:10:18.641 Submit histogram 00:10:18.641 ================ 00:10:18.641 Range in us Cumulative Count 00:10:18.641 3.532 - 3.556: 0.4014% ( 54) 00:10:18.641 3.556 - 3.579: 1.1894% ( 106) 00:10:18.641 3.579 - 3.603: 3.3155% ( 286) 00:10:18.642 3.603 - 3.627: 7.5305% ( 567) 00:10:18.642 3.627 - 3.650: 15.0907% ( 1017) 00:10:18.642 3.650 - 3.674: 23.0895% ( 1076) 00:10:18.642 3.674 - 3.698: 33.3185% ( 1376) 00:10:18.642 3.698 - 3.721: 41.8302% ( 1145) 00:10:18.642 3.721 - 3.745: 49.5614% ( 1040) 00:10:18.642 3.745 - 3.769: 55.5382% ( 804) 00:10:18.642 3.769 - 3.793: 60.9500% ( 728) 00:10:18.642 3.793 - 3.816: 65.1576% ( 566) 00:10:18.642 3.816 - 3.840: 68.6664% ( 472) 00:10:18.642 3.840 - 3.864: 71.9744% ( 445) 00:10:18.642 3.864 - 3.887: 75.3791% ( 458) 00:10:18.642 3.887 - 3.911: 78.9102% ( 475) 00:10:18.642 3.911 - 3.935: 82.5899% ( 495) 00:10:18.642 3.935 - 3.959: 85.2587% ( 359) 00:10:18.642 3.959 - 3.982: 87.3848% ( 286) 00:10:18.642 3.982 - 4.006: 89.3622% ( 266) 00:10:18.642 4.006 - 4.030: 91.1389% ( 239) 00:10:18.642 
4.030 - 4.053: 92.4621% ( 178) 00:10:18.642 4.053 - 4.077: 93.6589% ( 161) 00:10:18.642 4.077 - 4.101: 94.5882% ( 125) 00:10:18.642 4.101 - 4.124: 95.3464% ( 102) 00:10:18.642 4.124 - 4.148: 95.9709% ( 84) 00:10:18.642 4.148 - 4.172: 96.3574% ( 52) 00:10:18.642 4.172 - 4.196: 96.6325% ( 37) 00:10:18.642 4.196 - 4.219: 96.7514% ( 16) 00:10:18.642 4.219 - 4.243: 96.8555% ( 14) 00:10:18.642 4.243 - 4.267: 96.9447% ( 12) 00:10:18.642 4.267 - 4.290: 97.0636% ( 16) 00:10:18.642 4.290 - 4.314: 97.1231% ( 8) 00:10:18.642 4.314 - 4.338: 97.2123% ( 12) 00:10:18.642 4.338 - 4.361: 97.2866% ( 10) 00:10:18.642 4.361 - 4.385: 97.3387% ( 7) 00:10:18.642 4.385 - 4.409: 97.3536% ( 2) 00:10:18.642 4.409 - 4.433: 97.3907% ( 5) 00:10:18.642 4.433 - 4.456: 97.4205% ( 4) 00:10:18.642 4.480 - 4.504: 97.4279% ( 1) 00:10:18.642 4.504 - 4.527: 97.4353% ( 1) 00:10:18.642 4.551 - 4.575: 97.4428% ( 1) 00:10:18.642 4.575 - 4.599: 97.4576% ( 2) 00:10:18.642 4.599 - 4.622: 97.4651% ( 1) 00:10:18.642 4.646 - 4.670: 97.4725% ( 1) 00:10:18.642 4.670 - 4.693: 97.4799% ( 1) 00:10:18.642 4.693 - 4.717: 97.4874% ( 1) 00:10:18.642 4.717 - 4.741: 97.4948% ( 1) 00:10:18.642 4.741 - 4.764: 97.5097% ( 2) 00:10:18.642 4.764 - 4.788: 97.5394% ( 4) 00:10:18.642 4.788 - 4.812: 97.5691% ( 4) 00:10:18.642 4.812 - 4.836: 97.6212% ( 7) 00:10:18.642 4.836 - 4.859: 97.6509% ( 4) 00:10:18.642 4.859 - 4.883: 97.7104% ( 8) 00:10:18.642 4.883 - 4.907: 97.7475% ( 5) 00:10:18.642 4.907 - 4.930: 97.7996% ( 7) 00:10:18.642 4.930 - 4.954: 97.8293% ( 4) 00:10:18.642 4.954 - 4.978: 97.8665% ( 5) 00:10:18.642 4.978 - 5.001: 97.8962% ( 4) 00:10:18.642 5.001 - 5.025: 97.9185% ( 3) 00:10:18.642 5.025 - 5.049: 97.9706% ( 7) 00:10:18.642 5.049 - 5.073: 97.9929% ( 3) 00:10:18.642 5.073 - 5.096: 98.0375% ( 6) 00:10:18.642 5.096 - 5.120: 98.0672% ( 4) 00:10:18.642 5.120 - 5.144: 98.0969% ( 4) 00:10:18.642 5.144 - 5.167: 98.1490% ( 7) 00:10:18.642 5.167 - 5.191: 98.1638% ( 2) 00:10:18.642 5.191 - 5.215: 98.1713% ( 1) 00:10:18.642 5.215 - 
5.239: 98.2233% ( 7) 00:10:18.642 5.239 - 5.262: 98.2382% ( 2) 00:10:18.642 5.262 - 5.286: 98.2530% ( 2) 00:10:18.642 5.286 - 5.310: 98.2605% ( 1) 00:10:18.642 5.310 - 5.333: 98.2753% ( 2) 00:10:18.642 5.333 - 5.357: 98.2902% ( 2) 00:10:18.642 5.357 - 5.381: 98.3051% ( 2) 00:10:18.642 5.381 - 5.404: 98.3125% ( 1) 00:10:18.642 5.404 - 5.428: 98.3274% ( 2) 00:10:18.642 5.428 - 5.452: 98.3348% ( 1) 00:10:18.642 5.452 - 5.476: 98.3497% ( 2) 00:10:18.642 5.547 - 5.570: 98.3571% ( 1) 00:10:18.642 5.618 - 5.641: 98.3646% ( 1) 00:10:18.642 5.760 - 5.784: 98.3720% ( 1) 00:10:18.642 5.807 - 5.831: 98.3794% ( 1) 00:10:18.642 5.950 - 5.973: 98.3869% ( 1) 00:10:18.642 6.044 - 6.068: 98.3943% ( 1) 00:10:18.642 6.210 - 6.258: 98.4017% ( 1) 00:10:18.642 6.590 - 6.637: 98.4092% ( 1) 00:10:18.642 6.732 - 6.779: 98.4166% ( 1) 00:10:18.642 6.969 - 7.016: 98.4240% ( 1) 00:10:18.642 7.064 - 7.111: 98.4315% ( 1) 00:10:18.642 7.111 - 7.159: 98.4538% ( 3) 00:10:18.642 7.159 - 7.206: 98.4612% ( 1) 00:10:18.642 7.253 - 7.301: 98.4686% ( 1) 00:10:18.642 7.396 - 7.443: 98.4835% ( 2) 00:10:18.642 7.443 - 7.490: 98.5058% ( 3) 00:10:18.642 7.538 - 7.585: 98.5207% ( 2) 00:10:18.642 7.585 - 7.633: 98.5281% ( 1) 00:10:18.642 7.680 - 7.727: 98.5430% ( 2) 00:10:18.642 7.727 - 7.775: 98.5578% ( 2) 00:10:18.642 7.870 - 7.917: 98.5653% ( 1) 00:10:18.642 7.917 - 7.964: 98.5950% ( 4) 00:10:18.642 7.964 - 8.012: 98.6173% ( 3) 00:10:18.642 8.059 - 8.107: 98.6322% ( 2) 00:10:18.642 8.107 - 8.154: 98.6396% ( 1) 00:10:18.642 8.154 - 8.201: 98.6470% ( 1) 00:10:18.642 8.201 - 8.249: 98.6619% ( 2) 00:10:18.642 8.296 - 8.344: 98.6693% ( 1) 00:10:18.642 8.344 - 8.391: 98.6768% ( 1) 00:10:18.642 8.391 - 8.439: 98.6842% ( 1) 00:10:18.642 8.486 - 8.533: 98.6991% ( 2) 00:10:18.642 8.533 - 8.581: 98.7065% ( 1) 00:10:18.642 8.628 - 8.676: 98.7214% ( 2) 00:10:18.642 8.723 - 8.770: 98.7288% ( 1) 00:10:18.642 8.770 - 8.818: 98.7362% ( 1) 00:10:18.642 8.913 - 8.960: 98.7511% ( 2) 00:10:18.642 9.055 - 9.102: 98.7585% ( 1) 
00:10:18.642 9.102 - 9.150: 98.7734% ( 2) 00:10:18.642 9.150 - 9.197: 98.7809% ( 1) 00:10:18.642 9.244 - 9.292: 98.7883% ( 1) 00:10:18.642 9.292 - 9.339: 98.8032% ( 2) 00:10:18.642 9.339 - 9.387: 98.8106% ( 1) 00:10:18.642 9.481 - 9.529: 98.8180% ( 1) 00:10:18.642 9.529 - 9.576: 98.8255% ( 1) 00:10:18.642 9.719 - 9.766: 98.8403% ( 2) 00:10:18.642 9.813 - 9.861: 98.8478% ( 1) 00:10:18.642 9.861 - 9.908: 98.8552% ( 1) 00:10:18.642 9.908 - 9.956: 98.8626% ( 1) 00:10:18.642 10.003 - 10.050: 98.8701% ( 1) 00:10:18.642 10.050 - 10.098: 98.8775% ( 1) 00:10:18.642 10.145 - 10.193: 98.8998% ( 3) 00:10:18.642 10.572 - 10.619: 98.9221% ( 3) 00:10:18.642 10.904 - 10.951: 98.9370% ( 2) 00:10:18.642 10.951 - 10.999: 98.9444% ( 1) 00:10:18.642 10.999 - 11.046: 98.9518% ( 1) 00:10:18.642 11.046 - 11.093: 98.9667% ( 2) 00:10:18.642 11.378 - 11.425: 98.9741% ( 1) 00:10:18.642 11.473 - 11.520: 98.9816% ( 1) 00:10:18.642 11.899 - 11.947: 98.9890% ( 1) 00:10:18.642 12.089 - 12.136: 98.9964% ( 1) 00:10:18.642 12.421 - 12.516: 99.0187% ( 3) 00:10:18.642 13.179 - 13.274: 99.0262% ( 1) 00:10:18.642 13.559 - 13.653: 99.0336% ( 1) 00:10:18.642 13.843 - 13.938: 99.0410% ( 1) 00:10:18.642 13.938 - 14.033: 99.0559% ( 2) 00:10:18.642 14.033 - 14.127: 99.0633% ( 1) 00:10:18.642 14.127 - 14.222: 99.0708% ( 1) 00:10:18.642 14.412 - 14.507: 99.0782% ( 1) 00:10:18.642 14.791 - 14.886: 99.0856% ( 1) 00:10:18.642 14.981 - 15.076: 99.0931% ( 1) 00:10:18.642 15.929 - 16.024: 99.1079% ( 2) 00:10:18.642 17.067 - 17.161: 99.1154% ( 1) 00:10:18.642 17.161 - 17.256: 99.1377% ( 3) 00:10:18.642 17.256 - 17.351: 99.1525% ( 2) 00:10:18.642 17.351 - 17.446: 99.1600% ( 1) 00:10:18.642 17.446 - 17.541: 99.1897% ( 4) 00:10:18.642 17.541 - 17.636: 99.2194% ( 4) 00:10:18.642 17.636 - 17.730: 99.2566% ( 5) 00:10:18.642 17.730 - 17.825: 99.2864% ( 4) 00:10:18.642 17.825 - 17.920: 99.3533% ( 9) 00:10:18.642 17.920 - 18.015: 99.3979% ( 6) 00:10:18.642 18.015 - 18.110: 99.4350% ( 5) 00:10:18.642 18.110 - 18.204: 99.4796% ( 
6) 00:10:18.642 18.204 - 18.299: 99.5465% ( 9) 00:10:18.642 18.299 - 18.394: 99.5986% ( 7) 00:10:18.642 18.394 - 18.489: 99.6803% ( 11) 00:10:18.642 18.489 - 18.584: 99.7324% ( 7) 00:10:18.642 18.584 - 18.679: 99.7398% ( 1) 00:10:18.642 18.679 - 18.773: 99.8067% ( 9) 00:10:18.642 18.773 - 18.868: 99.8142% ( 1) 00:10:18.642 18.868 - 18.963: 99.8216% ( 1) 00:10:18.642 18.963 - 19.058: 99.8365% ( 2) 00:10:18.642 19.247 - 19.342: 99.8513% ( 2) 00:10:18.642 21.713 - 21.807: 99.8588% ( 1) 00:10:18.642 23.135 - 23.230: 99.8662% ( 1) 00:10:18.642 28.634 - 28.824: 99.8736% ( 1) 00:10:18.642 29.203 - 29.393: 99.8811% ( 1) 00:10:18.642 3179.710 - 3203.982: 99.8885% ( 1) 00:10:18.642 3980.705 - 4004.978: 99.9777% ( 12) 00:10:18.642 4004.978 - 4029.250: 100.0000% ( 3) 00:10:18.642 00:10:18.642 Complete histogram 00:10:18.642 ================== 00:10:18.642 Range in us Cumulative Count 00:10:18.642 2.062 - 2.074: 1.7470% ( 235) 00:10:18.642 2.074 - 2.086: 25.9069% ( 3250) 00:10:18.642 2.086 - 2.098: 36.3292% ( 1402) 00:10:18.642 2.098 - 2.110: 43.0345% ( 902) 00:10:18.642 2.110 - 2.121: 57.8427% ( 1992) 00:10:18.642 2.121 - 2.133: 61.9239% ( 549) 00:10:18.642 2.133 - 2.145: 66.6890% ( 641) 00:10:18.642 2.145 - 2.157: 77.1112% ( 1402) 00:10:18.642 2.157 - 2.169: 79.4529% ( 315) 00:10:18.642 2.169 - 2.181: 83.2293% ( 508) 00:10:18.642 2.181 - 2.193: 87.9423% ( 634) 00:10:18.642 2.193 - 2.204: 89.5480% ( 216) 00:10:18.642 2.204 - 2.216: 90.5219% ( 131) 00:10:18.642 2.216 - 2.228: 91.8005% ( 172) 00:10:18.642 2.228 - 2.240: 93.5772% ( 239) 00:10:18.642 2.240 - 2.252: 94.6179% ( 140) 00:10:18.643 2.252 - 2.264: 94.9673% ( 47) 00:10:18.643 2.264 - 2.276: 95.1977% ( 31) 00:10:18.643 2.276 - 2.287: 95.3464% ( 20) 00:10:18.643 2.287 - 2.299: 95.5769% ( 31) 00:10:18.643 2.299 - 2.311: 95.8445% ( 36) 00:10:18.643 2.311 - 2.323: 96.0155% ( 23) 00:10:18.643 2.323 - 2.335: 96.0526% ( 5) 00:10:18.643 2.335 - 2.347: 96.1270% ( 10) 00:10:18.643 2.347 - 2.359: 96.3128% ( 25) 00:10:18.643 2.359 - 
2.370: 96.5656% ( 34) 00:10:18.643 2.370 - 2.382: 96.8258% ( 35) 00:10:18.643 2.382 - 2.394: 97.1528% ( 44) 00:10:18.643 2.394 - 2.406: 97.3907% ( 32) 00:10:18.643 2.406 - 2.418: 97.6063% ( 29) 00:10:18.643 2.418 - 2.430: 97.8145% ( 28) 00:10:18.643 2.430 - 2.441: 97.9334% ( 16) 00:10:18.643 2.441 - 2.453: 98.0821% ( 20) 00:10:18.643 2.453 - 2.465: 98.2159% ( 18) 00:10:18.643 2.465 - 2.477: 98.3200% ( 14) 00:10:18.643 2.477 - 2.489: 98.3794% ( 8) 00:10:18.643 2.489 - 2.501: 98.4092% ( 4) 00:10:18.643 2.501 - 2.513: 98.4686% ( 8) 00:10:18.643 2.513 - 2.524: 98.4909% ( 3) 00:10:18.643 2.524 - 2.536: 98.4984% ( 1) 00:10:18.643 2.536 - 2.548: 98.5132% ( 2) 00:10:18.643 2.548 - 2.560: 98.5281% ( 2) 00:10:18.643 2.584 - 2.596: 98.5355% ( 1) 00:10:18.643 2.702 - 2.714: 98.5430% ( 1) 00:10:18.643 2.714 - 2.726: 98.5504% ( 1) 00:10:18.643 2.726 - 2.738: 98.5653% ( 2) 00:10:18.643 2.773 - 2.785: 98.5727% ( 1) 00:10:18.643 2.880 - 2.892: 98.5801% ( 1) 00:10:18.643 3.105 - 3.129: 98.5876% ( 1) 00:10:18.643 3.176 - 3.200: 98.5950% ( 1) 00:10:18.643 3.413 - 3.437: 98.6024% ( 1) 00:10:18.643 3.437 - 3.461: 98.6099% ( 1) 00:10:18.643 3.484 - 3.508: 98.6173% ( 1) 00:10:18.643 3.532 - 3.556: 98.6247% ( 1) 00:10:18.643 3.556 - 3.579: 98.6322% ( 1) 00:10:18.643 3.579 - 3.603: 98.6396% ( 1) 00:10:18.643 3.674 - 3.698: 98.6545% ( 2) 00:10:18.643 3.698 - 3.721: 98.6619% ( 1) 00:10:18.643 3.721 - 3.745: 98.6693% ( 1) 00:10:18.643 3.769 - 3.793: 98.6842% ( 2) 00:10:18.643 3.793 - 3.816: 98.6916% ( 1) 00:10:18.643 3.816 - 3.840: 98.6991% ( 1) 00:10:18.643 3.864 - 3.887: 98.7065% ( 1) 00:10:18.643 3.887 - 3.911: 98.7214% ( 2) 00:10:18.643 3.911 - 3.935: 98.7362% ( 2) 00:10:18.643 4.053 - 4.077: 98.7437% ( 1) 00:10:18.643 4.077 - 4.101: 98.7511% ( 1) 00:10:18.643 4.267 - 4.290: 98.7585% ( 1) 00:10:18.643 5.025 - 5.049: 98.7660% ( 1) 00:10:18.643 5.073 - 5.096: 98.7734% ( 1) 00:10:18.643 5.665 - 5.689: 98.7809% ( 1) 00:10:18.643 5.713 - 5.736: 98.7883% ( 1) 00:10:18.643 5.855 - 5.879: 
[2024-07-15 22:34:01.711979] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:18.643 98.7957% ( 1) 00:10:18.643 6.021 - 6.044: 98.8032% ( 1) 00:10:18.643 6.258 - 6.305: 98.8106% ( 1) 00:10:18.643 6.305 - 6.353: 98.8180% ( 1) 00:10:18.643 6.921 - 6.969: 98.8255% ( 1) 00:10:18.643 7.016 - 7.064: 98.8329% ( 1) 00:10:18.643 7.585 - 7.633: 98.8403% ( 1) 00:10:18.643 7.727 - 7.775: 98.8478% ( 1) 00:10:18.643 15.550 - 15.644: 98.8552% ( 1) 00:10:18.643 15.739 - 15.834: 98.8775% ( 3) 00:10:18.643 15.834 - 15.929: 98.8849% ( 1) 00:10:18.643 15.929 - 16.024: 98.9147% ( 4) 00:10:18.643 16.024 - 16.119: 98.9370% ( 3) 00:10:18.643 16.119 - 16.213: 98.9667% ( 4) 00:10:18.643 16.213 - 16.308: 99.0113% ( 6) 00:10:18.643 16.308 - 16.403: 99.0336% ( 3) 00:10:18.643 16.403 - 16.498: 99.1154% ( 11) 00:10:18.643 16.498 - 16.593: 99.1451% ( 4) 00:10:18.643 16.593 - 16.687: 99.1971% ( 7) 00:10:18.643 16.687 - 16.782: 99.2566% ( 8) 00:10:18.643 16.782 - 16.877: 99.2864% ( 4) 00:10:18.643 16.877 - 16.972: 99.2938% ( 1) 00:10:18.643 16.972 - 17.067: 99.3161% ( 3) 00:10:18.643 17.161 - 17.256: 99.3384% ( 3) 00:10:18.643 17.256 - 17.351: 99.3458% ( 1) 00:10:18.643 17.351 - 17.446: 99.3533% ( 1) 00:10:18.643 17.446 - 17.541: 99.3607% ( 1) 00:10:18.643 18.015 - 18.110: 99.3756% ( 2) 00:10:18.643 18.299 - 18.394: 99.3830% ( 1) 00:10:18.643 3021.938 - 3034.074: 99.3904% ( 1) 00:10:18.643 3070.483 - 3082.619: 99.3979% ( 1) 00:10:18.643 3980.705 - 4004.978: 99.9257% ( 71) 00:10:18.643 4004.978 - 4029.250: 99.9926% ( 9) 00:10:18.643 5000.154 - 5024.427: 100.0000% ( 1) 00:10:18.643 00:10:18.643 22:34:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:10:18.643 22:34:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:18.643 22:34:01 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:10:18.643 22:34:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:10:18.643 22:34:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:18.643 [ 00:10:18.643 { 00:10:18.643 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:18.643 "subtype": "Discovery", 00:10:18.643 "listen_addresses": [], 00:10:18.643 "allow_any_host": true, 00:10:18.643 "hosts": [] 00:10:18.643 }, 00:10:18.643 { 00:10:18.643 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:18.643 "subtype": "NVMe", 00:10:18.643 "listen_addresses": [ 00:10:18.643 { 00:10:18.643 "trtype": "VFIOUSER", 00:10:18.643 "adrfam": "IPv4", 00:10:18.643 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:18.643 "trsvcid": "0" 00:10:18.643 } 00:10:18.643 ], 00:10:18.643 "allow_any_host": true, 00:10:18.643 "hosts": [], 00:10:18.643 "serial_number": "SPDK1", 00:10:18.643 "model_number": "SPDK bdev Controller", 00:10:18.643 "max_namespaces": 32, 00:10:18.643 "min_cntlid": 1, 00:10:18.643 "max_cntlid": 65519, 00:10:18.643 "namespaces": [ 00:10:18.643 { 00:10:18.643 "nsid": 1, 00:10:18.643 "bdev_name": "Malloc1", 00:10:18.643 "name": "Malloc1", 00:10:18.643 "nguid": "334D449804354C93A68C352D94E23FB9", 00:10:18.643 "uuid": "334d4498-0435-4c93-a68c-352d94e23fb9" 00:10:18.643 }, 00:10:18.643 { 00:10:18.643 "nsid": 2, 00:10:18.643 "bdev_name": "Malloc3", 00:10:18.643 "name": "Malloc3", 00:10:18.643 "nguid": "FDB84C2675A24B09B94C2BC6976F55DB", 00:10:18.643 "uuid": "fdb84c26-75a2-4b09-b94c-2bc6976f55db" 00:10:18.643 } 00:10:18.643 ] 00:10:18.643 }, 00:10:18.643 { 00:10:18.643 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:18.643 "subtype": "NVMe", 00:10:18.643 "listen_addresses": [ 00:10:18.643 { 00:10:18.643 "trtype": "VFIOUSER", 00:10:18.643 "adrfam": "IPv4", 00:10:18.643 "traddr": 
"/var/run/vfio-user/domain/vfio-user2/2", 00:10:18.643 "trsvcid": "0" 00:10:18.643 } 00:10:18.643 ], 00:10:18.643 "allow_any_host": true, 00:10:18.643 "hosts": [], 00:10:18.643 "serial_number": "SPDK2", 00:10:18.643 "model_number": "SPDK bdev Controller", 00:10:18.643 "max_namespaces": 32, 00:10:18.643 "min_cntlid": 1, 00:10:18.643 "max_cntlid": 65519, 00:10:18.643 "namespaces": [ 00:10:18.643 { 00:10:18.643 "nsid": 1, 00:10:18.643 "bdev_name": "Malloc2", 00:10:18.643 "name": "Malloc2", 00:10:18.643 "nguid": "6E308E16E7D64DE2B9C6312BE220D4B4", 00:10:18.643 "uuid": "6e308e16-e7d6-4de2-b9c6-312be220d4b4" 00:10:18.643 } 00:10:18.643 ] 00:10:18.643 } 00:10:18.643 ] 00:10:18.643 22:34:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:18.643 22:34:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1206220 00:10:18.643 22:34:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:10:18.643 22:34:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:18.643 22:34:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1259 -- # local i=0 00:10:18.643 22:34:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1260 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:18.643 22:34:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:10:18.643 22:34:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # return 0 00:10:18.643 22:34:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:18.643 22:34:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:10:18.901 [2024-07-15 22:34:02.165341] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:18.901 Malloc4 00:10:18.901 22:34:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:10:19.159 [2024-07-15 22:34:02.522132] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:19.159 22:34:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:19.159 Asynchronous Event Request test 00:10:19.159 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:19.159 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:19.159 Registering asynchronous event callbacks... 00:10:19.159 Starting namespace attribute notice tests for all controllers... 00:10:19.159 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:19.159 aer_cb - Changed Namespace 00:10:19.159 Cleaning up... 
00:10:19.417 [ 00:10:19.417 { 00:10:19.417 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:19.417 "subtype": "Discovery", 00:10:19.417 "listen_addresses": [], 00:10:19.417 "allow_any_host": true, 00:10:19.417 "hosts": [] 00:10:19.417 }, 00:10:19.417 { 00:10:19.417 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:19.417 "subtype": "NVMe", 00:10:19.417 "listen_addresses": [ 00:10:19.417 { 00:10:19.417 "trtype": "VFIOUSER", 00:10:19.417 "adrfam": "IPv4", 00:10:19.417 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:19.417 "trsvcid": "0" 00:10:19.417 } 00:10:19.417 ], 00:10:19.417 "allow_any_host": true, 00:10:19.417 "hosts": [], 00:10:19.417 "serial_number": "SPDK1", 00:10:19.417 "model_number": "SPDK bdev Controller", 00:10:19.417 "max_namespaces": 32, 00:10:19.417 "min_cntlid": 1, 00:10:19.417 "max_cntlid": 65519, 00:10:19.417 "namespaces": [ 00:10:19.417 { 00:10:19.417 "nsid": 1, 00:10:19.417 "bdev_name": "Malloc1", 00:10:19.417 "name": "Malloc1", 00:10:19.417 "nguid": "334D449804354C93A68C352D94E23FB9", 00:10:19.417 "uuid": "334d4498-0435-4c93-a68c-352d94e23fb9" 00:10:19.417 }, 00:10:19.417 { 00:10:19.417 "nsid": 2, 00:10:19.417 "bdev_name": "Malloc3", 00:10:19.418 "name": "Malloc3", 00:10:19.418 "nguid": "FDB84C2675A24B09B94C2BC6976F55DB", 00:10:19.418 "uuid": "fdb84c26-75a2-4b09-b94c-2bc6976f55db" 00:10:19.418 } 00:10:19.418 ] 00:10:19.418 }, 00:10:19.418 { 00:10:19.418 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:19.418 "subtype": "NVMe", 00:10:19.418 "listen_addresses": [ 00:10:19.418 { 00:10:19.418 "trtype": "VFIOUSER", 00:10:19.418 "adrfam": "IPv4", 00:10:19.418 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:19.418 "trsvcid": "0" 00:10:19.418 } 00:10:19.418 ], 00:10:19.418 "allow_any_host": true, 00:10:19.418 "hosts": [], 00:10:19.418 "serial_number": "SPDK2", 00:10:19.418 "model_number": "SPDK bdev Controller", 00:10:19.418 "max_namespaces": 32, 00:10:19.418 "min_cntlid": 1, 00:10:19.418 "max_cntlid": 65519, 00:10:19.418 "namespaces": [ 
00:10:19.418 { 00:10:19.418 "nsid": 1, 00:10:19.418 "bdev_name": "Malloc2", 00:10:19.418 "name": "Malloc2", 00:10:19.418 "nguid": "6E308E16E7D64DE2B9C6312BE220D4B4", 00:10:19.418 "uuid": "6e308e16-e7d6-4de2-b9c6-312be220d4b4" 00:10:19.418 }, 00:10:19.418 { 00:10:19.418 "nsid": 2, 00:10:19.418 "bdev_name": "Malloc4", 00:10:19.418 "name": "Malloc4", 00:10:19.418 "nguid": "976FEF77FAA94A4F864028D095CD6A26", 00:10:19.418 "uuid": "976fef77-faa9-4a4f-8640-28d095cd6a26" 00:10:19.418 } 00:10:19.418 ] 00:10:19.418 } 00:10:19.418 ] 00:10:19.418 22:34:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1206220 00:10:19.418 22:34:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:10:19.418 22:34:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1200658 00:10:19.418 22:34:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@942 -- # '[' -z 1200658 ']' 00:10:19.418 22:34:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # kill -0 1200658 00:10:19.418 22:34:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@947 -- # uname 00:10:19.418 22:34:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:10:19.418 22:34:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1200658 00:10:19.418 22:34:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:10:19.418 22:34:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:10:19.418 22:34:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1200658' 00:10:19.418 killing process with pid 1200658 00:10:19.418 22:34:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@961 -- # kill 1200658 00:10:19.418 22:34:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # wait 1200658 00:10:19.676 22:34:03 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:19.676 22:34:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:19.940 22:34:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:10:19.940 22:34:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:10:19.940 22:34:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:10:19.940 22:34:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1206406 00:10:19.940 22:34:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:10:19.940 22:34:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1206406' 00:10:19.940 Process pid: 1206406 00:10:19.940 22:34:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:19.940 22:34:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1206406 00:10:19.940 22:34:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@823 -- # '[' -z 1206406 ']' 00:10:19.940 22:34:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.940 22:34:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@828 -- # local max_retries=100 00:10:19.940 22:34:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:19.940 22:34:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # xtrace_disable 00:10:19.941 22:34:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:19.941 [2024-07-15 22:34:03.219583] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:10:19.941 [2024-07-15 22:34:03.220576] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:10:19.941 [2024-07-15 22:34:03.220629] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:19.941 [2024-07-15 22:34:03.282137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:19.941 [2024-07-15 22:34:03.392763] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:19.941 [2024-07-15 22:34:03.392822] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:19.941 [2024-07-15 22:34:03.392851] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:19.941 [2024-07-15 22:34:03.392862] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:19.941 [2024-07-15 22:34:03.392873] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:19.941 [2024-07-15 22:34:03.392937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.941 [2024-07-15 22:34:03.394899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:19.941 [2024-07-15 22:34:03.394969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:19.941 [2024-07-15 22:34:03.394972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.201 [2024-07-15 22:34:03.500660] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:10:20.201 [2024-07-15 22:34:03.500861] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:10:20.201 [2024-07-15 22:34:03.501217] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:10:20.201 [2024-07-15 22:34:03.501860] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:10:20.201 [2024-07-15 22:34:03.502131] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:10:20.201 22:34:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:10:20.201 22:34:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # return 0 00:10:20.201 22:34:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:21.136 22:34:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:10:21.394 22:34:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:21.394 22:34:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:21.394 22:34:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:21.394 22:34:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:21.395 22:34:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:21.652 Malloc1 00:10:21.652 22:34:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:21.922 22:34:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:22.180 22:34:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:22.437 22:34:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:22.437 22:34:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p 
/var/run/vfio-user/domain/vfio-user2/2 00:10:22.437 22:34:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:22.694 Malloc2 00:10:22.694 22:34:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:22.951 22:34:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:23.208 22:34:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:23.467 22:34:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:10:23.467 22:34:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1206406 00:10:23.467 22:34:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@942 -- # '[' -z 1206406 ']' 00:10:23.467 22:34:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # kill -0 1206406 00:10:23.467 22:34:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@947 -- # uname 00:10:23.467 22:34:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:10:23.467 22:34:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1206406 00:10:23.467 22:34:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:10:23.467 22:34:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:10:23.467 22:34:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1206406' 00:10:23.467 killing 
process with pid 1206406 00:10:23.467 22:34:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@961 -- # kill 1206406 00:10:23.467 22:34:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # wait 1206406 00:10:23.726 22:34:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:23.726 22:34:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:23.726 00:10:23.726 real 0m52.513s 00:10:23.726 user 3m27.123s 00:10:23.726 sys 0m4.243s 00:10:23.726 22:34:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1118 -- # xtrace_disable 00:10:23.726 22:34:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:23.726 ************************************ 00:10:23.726 END TEST nvmf_vfio_user 00:10:23.726 ************************************ 00:10:23.726 22:34:07 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:10:23.726 22:34:07 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:23.726 22:34:07 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:10:23.726 22:34:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:10:23.726 22:34:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:23.984 ************************************ 00:10:23.984 START TEST nvmf_vfio_user_nvme_compliance 00:10:23.984 ************************************ 00:10:23.984 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:23.984 * Looking for test storage... 
00:10:23.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:10:23.984 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:23.984 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:10:23.984 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:23.984 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:23.984 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:23.984 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:23.984 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:23.985 22:34:07 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:10:23.985 22:34:07 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1206889 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1206889' 00:10:23.985 Process pid: 1206889 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1206889 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@823 -- # '[' -z 1206889 ']' 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@828 -- # local max_retries=100 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # xtrace_disable 00:10:23.985 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:23.985 [2024-07-15 22:34:07.366641] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:10:23.985 [2024-07-15 22:34:07.366731] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.985 [2024-07-15 22:34:07.423674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:24.244 [2024-07-15 22:34:07.529971] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:24.244 [2024-07-15 22:34:07.530027] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:24.244 [2024-07-15 22:34:07.530055] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:24.244 [2024-07-15 22:34:07.530066] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:24.244 [2024-07-15 22:34:07.530075] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:24.244 [2024-07-15 22:34:07.530128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:24.244 [2024-07-15 22:34:07.530191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:24.244 [2024-07-15 22:34:07.530197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.244 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:10:24.244 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # return 0 00:10:24.244 22:34:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:10:25.179 22:34:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:25.179 22:34:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:10:25.179 22:34:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:25.179 22:34:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:25.179 22:34:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:25.179 22:34:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:25.179 22:34:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:10:25.179 22:34:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # 
rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:25.179 22:34:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:25.179 22:34:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:25.439 malloc0 00:10:25.439 22:34:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:25.439 22:34:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:10:25.439 22:34:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:25.439 22:34:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:25.439 22:34:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:25.439 22:34:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:25.439 22:34:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:25.439 22:34:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:25.439 22:34:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:25.439 22:34:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:25.439 22:34:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:25.439 22:34:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:25.439 22:34:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:25.439 22:34:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance 
-- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:10:25.439 00:10:25.439 00:10:25.439 CUnit - A unit testing framework for C - Version 2.1-3 00:10:25.439 http://cunit.sourceforge.net/ 00:10:25.439 00:10:25.439 00:10:25.439 Suite: nvme_compliance 00:10:25.439 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 22:34:08.871446] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:25.439 [2024-07-15 22:34:08.872953] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:10:25.439 [2024-07-15 22:34:08.872980] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:10:25.439 [2024-07-15 22:34:08.872992] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:10:25.439 [2024-07-15 22:34:08.877487] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:25.439 passed 00:10:25.699 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 22:34:08.961091] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:25.699 [2024-07-15 22:34:08.964116] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:25.699 passed 00:10:25.699 Test: admin_identify_ns ...[2024-07-15 22:34:09.051372] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:25.699 [2024-07-15 22:34:09.108894] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:10:25.699 [2024-07-15 22:34:09.118894] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:10:25.699 [2024-07-15 22:34:09.140033] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:25.699 passed 00:10:25.958 Test: 
admin_get_features_mandatory_features ...[2024-07-15 22:34:09.223505] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:25.958 [2024-07-15 22:34:09.226524] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:25.958 passed 00:10:25.958 Test: admin_get_features_optional_features ...[2024-07-15 22:34:09.310040] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:25.958 [2024-07-15 22:34:09.313058] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:25.958 passed 00:10:25.958 Test: admin_set_features_number_of_queues ...[2024-07-15 22:34:09.394350] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:26.216 [2024-07-15 22:34:09.497979] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:26.216 passed 00:10:26.216 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 22:34:09.583480] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:26.216 [2024-07-15 22:34:09.586505] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:26.216 passed 00:10:26.216 Test: admin_get_log_page_with_lpo ...[2024-07-15 22:34:09.669553] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:26.475 [2024-07-15 22:34:09.737897] ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:10:26.475 [2024-07-15 22:34:09.750979] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:26.475 passed 00:10:26.475 Test: fabric_property_get ...[2024-07-15 22:34:09.833499] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:26.475 [2024-07-15 22:34:09.834774] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:10:26.475 [2024-07-15 
22:34:09.836519] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:26.475 passed 00:10:26.475 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 22:34:09.920047] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:26.475 [2024-07-15 22:34:09.921348] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:10:26.475 [2024-07-15 22:34:09.923067] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:26.475 passed 00:10:26.735 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 22:34:10.007379] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:26.735 [2024-07-15 22:34:10.090891] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:26.735 [2024-07-15 22:34:10.106888] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:26.735 [2024-07-15 22:34:10.112010] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:26.735 passed 00:10:26.735 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 22:34:10.196644] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:26.735 [2024-07-15 22:34:10.197960] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:10:26.735 [2024-07-15 22:34:10.199664] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:26.735 passed 00:10:26.994 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 22:34:10.279804] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:26.994 [2024-07-15 22:34:10.356889] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:26.994 [2024-07-15 22:34:10.380889] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O 
sqid:1 does not exist 00:10:26.994 [2024-07-15 22:34:10.385981] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:26.994 passed 00:10:26.994 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 22:34:10.469500] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:26.994 [2024-07-15 22:34:10.470793] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:10:26.994 [2024-07-15 22:34:10.470846] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:10:26.994 [2024-07-15 22:34:10.472528] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:27.253 passed 00:10:27.253 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 22:34:10.552797] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:27.253 [2024-07-15 22:34:10.645907] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:10:27.253 [2024-07-15 22:34:10.653888] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:10:27.253 [2024-07-15 22:34:10.661898] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:10:27.253 [2024-07-15 22:34:10.669887] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:10:27.253 [2024-07-15 22:34:10.699011] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:27.253 passed 00:10:27.512 Test: admin_create_io_sq_verify_pc ...[2024-07-15 22:34:10.778517] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:27.512 [2024-07-15 22:34:10.794902] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:10:27.512 [2024-07-15 22:34:10.812821] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:27.512 
passed 00:10:27.512 Test: admin_create_io_qp_max_qps ...[2024-07-15 22:34:10.897412] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:28.893 [2024-07-15 22:34:12.008895] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:10:29.152 [2024-07-15 22:34:12.396647] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:29.152 passed 00:10:29.152 Test: admin_create_io_sq_shared_cq ...[2024-07-15 22:34:12.478862] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:29.152 [2024-07-15 22:34:12.612884] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:29.152 [2024-07-15 22:34:12.649961] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:29.432 passed 00:10:29.432 00:10:29.432 Run Summary: Type Total Ran Passed Failed Inactive 00:10:29.432 suites 1 1 n/a 0 0 00:10:29.432 tests 18 18 18 0 0 00:10:29.432 asserts 360 360 360 0 n/a 00:10:29.432 00:10:29.432 Elapsed time = 1.564 seconds 00:10:29.432 22:34:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1206889 00:10:29.432 22:34:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@942 -- # '[' -z 1206889 ']' 00:10:29.432 22:34:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # kill -0 1206889 00:10:29.432 22:34:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@947 -- # uname 00:10:29.432 22:34:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:10:29.432 22:34:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1206889 00:10:29.432 22:34:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:10:29.432 22:34:12 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:10:29.432 22:34:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1206889' 00:10:29.432 killing process with pid 1206889 00:10:29.432 22:34:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@961 -- # kill 1206889 00:10:29.432 22:34:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # wait 1206889 00:10:29.709 22:34:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:10:29.709 22:34:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:10:29.709 00:10:29.709 real 0m5.765s 00:10:29.709 user 0m16.100s 00:10:29.709 sys 0m0.553s 00:10:29.709 22:34:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1118 -- # xtrace_disable 00:10:29.709 22:34:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:29.709 ************************************ 00:10:29.709 END TEST nvmf_vfio_user_nvme_compliance 00:10:29.709 ************************************ 00:10:29.709 22:34:13 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:10:29.709 22:34:13 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:29.709 22:34:13 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:10:29.709 22:34:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:10:29.709 22:34:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:29.709 ************************************ 00:10:29.709 START TEST nvmf_vfio_user_fuzz 00:10:29.709 ************************************ 00:10:29.709 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:29.709 * Looking for test storage... 00:10:29.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:29.709 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:29.709 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:10:29.709 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:29.709 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.709 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:29.709 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.709 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:29.709 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.709 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.709 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.709 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.709 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:29.709 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:29.709 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:29.709 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:29.709 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:10:29.709 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:29.709 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:29.709 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:29.709 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.709 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.709 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.709 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.709 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.710 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.710 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:10:29.710 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.710 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:10:29.710 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:29.710 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:29.710 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:29.710 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:29.710 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:29.710 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:29.710 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:29.710 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:29.710 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:29.710 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:29.710 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:29.710 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:10:29.710 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:29.710 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:29.710 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:10:29.710 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1207614 00:10:29.710 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:29.710 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1207614' 00:10:29.710 Process pid: 1207614 00:10:29.710 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:29.710 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1207614 00:10:29.710 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@823 -- # '[' -z 1207614 ']' 00:10:29.710 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.710 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@828 -- # local max_retries=100 00:10:29.710 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.710 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # xtrace_disable 00:10:29.710 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:30.278 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:10:30.278 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # return 0 00:10:30.278 22:34:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:10:31.212 22:34:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:31.212 22:34:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:31.212 22:34:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:31.212 22:34:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:31.212 22:34:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:10:31.212 22:34:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:31.212 22:34:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:31.212 22:34:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:31.212 malloc0 00:10:31.212 22:34:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:31.212 22:34:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:10:31.212 22:34:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:31.212 
22:34:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:31.212 22:34:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:31.212 22:34:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:31.212 22:34:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:31.212 22:34:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:31.212 22:34:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:31.212 22:34:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:31.212 22:34:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:31.212 22:34:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:31.212 22:34:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:31.212 22:34:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:10:31.212 22:34:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:11:03.283 Fuzzing completed. 
Shutting down the fuzz application 00:11:03.283 00:11:03.283 Dumping successful admin opcodes: 00:11:03.283 8, 9, 10, 24, 00:11:03.283 Dumping successful io opcodes: 00:11:03.283 0, 00:11:03.283 NS: 0x200003a1ef00 I/O qp, Total commands completed: 575291, total successful commands: 2215, random_seed: 2045006592 00:11:03.283 NS: 0x200003a1ef00 admin qp, Total commands completed: 73835, total successful commands: 580, random_seed: 939514688 00:11:03.283 22:34:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:11:03.283 22:34:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:03.283 22:34:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:03.283 22:34:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:03.283 22:34:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1207614 00:11:03.283 22:34:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@942 -- # '[' -z 1207614 ']' 00:11:03.283 22:34:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # kill -0 1207614 00:11:03.283 22:34:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@947 -- # uname 00:11:03.283 22:34:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:11:03.283 22:34:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1207614 00:11:03.283 22:34:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:11:03.283 22:34:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:11:03.283 22:34:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1207614' 00:11:03.283 killing process with pid 1207614 00:11:03.283 22:34:45 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@961 -- # kill 1207614 00:11:03.283 22:34:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # wait 1207614 00:11:03.283 22:34:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:11:03.283 22:34:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:11:03.283 00:11:03.283 real 0m32.347s 00:11:03.283 user 0m31.077s 00:11:03.283 sys 0m28.541s 00:11:03.283 22:34:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1118 -- # xtrace_disable 00:11:03.283 22:34:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:03.283 ************************************ 00:11:03.283 END TEST nvmf_vfio_user_fuzz 00:11:03.283 ************************************ 00:11:03.283 22:34:45 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:11:03.283 22:34:45 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:03.283 22:34:45 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:11:03.283 22:34:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:11:03.283 22:34:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:03.283 ************************************ 00:11:03.283 START TEST nvmf_host_management 00:11:03.283 ************************************ 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:03.283 * Looking for test storage... 
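The nvmf_vfio_user_fuzz test that just completed stood its target up through a fixed RPC sequence: create a VFIOUSER transport, a 64 MiB / 512 B-block malloc bdev, subsystem nqn.2021-09.io.spdk:cnode0, a namespace, and a listener on /var/run/vfio-user. A condensed sketch of that sequence follows; it only prints the plan, and the `rpc.py` client name is an assumption (the log actually drives these calls through target/vfio_user_fuzz.sh's `rpc_cmd` wrapper against a live nvmf_tgt):

```shell
# Sketch only: prints the RPC plan rather than executing it, since the
# real sequence needs a running nvmf_tgt and SPDK's scripts/rpc.py.
NQN=nqn.2021-09.io.spdk:cnode0
TRADDR=/var/run/vfio-user

plan=$(cat <<EOF
rpc.py nvmf_create_transport -t VFIOUSER
rpc.py bdev_malloc_create 64 512 -b malloc0
rpc.py nvmf_create_subsystem $NQN -a -s spdk
rpc.py nvmf_subsystem_add_ns $NQN malloc0
rpc.py nvmf_subsystem_add_listener $NQN -t VFIOUSER -a $TRADDR -s 0
EOF
)
echo "$plan"
```

After this setup, the log launches nvme_fuzz for 30 seconds (`-t 30`) against the resulting trid `trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user`, then tears down with `nvmf_delete_subsystem` and kills the target.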
00:11:03.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:03.283 
22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:03.283 22:34:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:03.284 22:34:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:03.284 22:34:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:03.284 22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:03.284 22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:03.284 22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:03.284 22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:03.284 22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:03.284 22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.284 22:34:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:03.284 22:34:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.284 22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:03.284 22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:03.284 22:34:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:11:03.284 22:34:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:04.219 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:04.219 
22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:04.219 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:04.219 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:04.219 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:04.219 22:34:47 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:04.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:04.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:11:04.219 00:11:04.219 --- 10.0.0.2 ping statistics --- 00:11:04.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.219 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:04.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:04.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:11:04.219 00:11:04.219 --- 10.0.0.1 ping statistics --- 00:11:04.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.219 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:04.219 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:04.220 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:04.220 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:04.220 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:04.220 22:34:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:04.220 22:34:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:04.220 22:34:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:04.220 22:34:47 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:04.220 22:34:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:04.220 22:34:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:04.220 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1213064 00:11:04.220 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:04.220 22:34:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1213064 00:11:04.220 22:34:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@823 -- # '[' -z 1213064 ']' 00:11:04.220 22:34:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.220 22:34:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@828 -- # local max_retries=100 00:11:04.220 22:34:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.220 22:34:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # xtrace_disable 00:11:04.220 22:34:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:04.220 [2024-07-15 22:34:47.659590] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:11:04.220 [2024-07-15 22:34:47.659690] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.479 [2024-07-15 22:34:47.730686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:04.479 [2024-07-15 22:34:47.854317] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.479 [2024-07-15 22:34:47.854368] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:04.479 [2024-07-15 22:34:47.854384] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.479 [2024-07-15 22:34:47.854398] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.479 [2024-07-15 22:34:47.854410] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:04.479 [2024-07-15 22:34:47.854500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:04.479 [2024-07-15 22:34:47.854551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:04.479 [2024-07-15 22:34:47.855037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:04.479 [2024-07-15 22:34:47.855042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # return 0 00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:05.410 [2024-07-15 22:34:48.618811] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:05.410 22:34:48 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat
00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd
00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@553 -- # xtrace_disable
00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:05.410 Malloc0
00:11:05.410 [2024-07-15 22:34:48.683962] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems
00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable
00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1213238
00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1213238 /var/tmp/bdevperf.sock
00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@823 -- # '[' -z 1213238 ']'
00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@828 -- # local max_retries=100
00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:11:05.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # xtrace_disable
00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:11:05.410 {
00:11:05.410 "params": {
00:11:05.410 "name": "Nvme$subsystem",
00:11:05.410 "trtype": "$TEST_TRANSPORT",
00:11:05.410 "traddr": "$NVMF_FIRST_TARGET_IP",
00:11:05.410 "adrfam": "ipv4",
00:11:05.410 "trsvcid": "$NVMF_PORT",
00:11:05.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:11:05.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:11:05.410 "hdgst": ${hdgst:-false},
00:11:05.410 "ddgst": ${ddgst:-false}
00:11:05.410 },
00:11:05.410 "method": "bdev_nvme_attach_controller"
00:11:05.410 }
00:11:05.410 EOF
00:11:05.410 )")
00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
00:11:05.410 22:34:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:11:05.410 "params": {
00:11:05.410 "name": "Nvme0",
00:11:05.410 "trtype": "tcp",
00:11:05.410 "traddr": "10.0.0.2",
00:11:05.410 "adrfam": "ipv4",
00:11:05.410 "trsvcid": "4420",
00:11:05.410 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:11:05.410 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:11:05.410 "hdgst": false,
00:11:05.410 "ddgst": false
00:11:05.410 },
00:11:05.410 "method": "bdev_nvme_attach_controller"
00:11:05.410 }'
00:11:05.410 [2024-07-15 22:34:48.764029] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization...
00:11:05.410 [2024-07-15 22:34:48.764119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1213238 ]
00:11:05.410 [2024-07-15 22:34:48.824136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:05.668 [2024-07-15 22:34:48.934567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:11:05.926 Running I/O for 10 seconds...
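Annotation: the `gen_nvmf_target_json` trace above collects one JSON fragment per subsystem in a `config` array and normalizes it through `jq .` before feeding it to bdevperf via `/dev/fd/63`. The sketch below reproduces the same JSON shape with a standalone function; `gen_subsystem_json` is a hypothetical name (not SPDK's helper), and the default transport/address/port values are taken from this log, not guaranteed elsewhere.

```shell
#!/usr/bin/env bash
# Build the per-subsystem bdev_nvme_attach_controller config with jq -n
# instead of the heredoc used by nvmf/common.sh; same output shape as
# the JSON printed in the log above. Environment variables fall back to
# the values seen in this run.
gen_subsystem_json() {
  local n=$1
  jq -n \
    --arg n "$n" \
    --arg tr "${TEST_TRANSPORT:-tcp}" \
    --arg ip "${NVMF_FIRST_TARGET_IP:-10.0.0.2}" \
    --arg port "${NVMF_PORT:-4420}" '
    { params:
      { name: ("Nvme" + $n),
        trtype: $tr,
        traddr: $ip,
        adrfam: "ipv4",
        trsvcid: $port,
        subnqn: ("nqn.2016-06.io.spdk:cnode" + $n),
        hostnqn: ("nqn.2016-06.io.spdk:host" + $n),
        hdgst: false,
        ddgst: false },
      method: "bdev_nvme_attach_controller" }'
}

gen_subsystem_json 0
```

Like the traced helper, the output of several such fragments would be joined with `IFS=,` before being handed to bdevperf as its `--json` config.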
00:11:05.926 22:34:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:11:05.926 22:34:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # return 0
00:11:05.926 22:34:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:11:05.926 22:34:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@553 -- # xtrace_disable
00:11:05.926 22:34:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:05.926 22:34:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:11:05.926 22:34:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:11:05.926 22:34:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:11:05.926 22:34:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:11:05.926 22:34:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:11:05.926 22:34:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:11:05.926 22:34:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:11:05.926 22:34:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:11:05.926 22:34:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:11:05.926 22:34:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:11:05.926 22:34:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:11:05.926 22:34:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@553 -- # xtrace_disable
00:11:05.926 22:34:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:05.926 22:34:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:11:05.926 22:34:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=3
00:11:05.926 22:34:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 3 -ge 100 ']'
00:11:05.926 22:34:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25
00:11:06.185 22:34:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- ))
00:11:06.185 22:34:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:11:06.185 22:34:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:11:06.185 22:34:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:11:06.185 22:34:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@553 -- # xtrace_disable
00:11:06.185 22:34:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:06.185 22:34:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:11:06.185 22:34:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=385
00:11:06.185 22:34:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 385 -ge 100 ']'
00:11:06.185 22:34:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:11:06.185 22:34:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break
00:11:06.185 22:34:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:11:06.185 22:34:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:11:06.185 22:34:49
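Annotation: the `waitforio` trace above polls bdevperf's iostat over its RPC socket up to 10 times, 250 ms apart, until the bdev has completed at least 100 reads (here 3 on the first pass, 385 on the second). A simplified, runnable re-expression of that loop is sketched below; the real helper shells out to `rpc_cmd -s ... bdev_get_iostat` piped through `jq`, which is abstracted here as a caller-supplied probe command so the loop shape stands alone.

```shell
#!/usr/bin/env bash
# Sketch of the waitforio polling loop traced above. $1 is any command
# that prints the current num_read_ops (in the test it would be
# "rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
#  jq -r '.bdevs[0].num_read_ops'"). Retries up to 10 times with a
# 250 ms pause, succeeding once at least 100 reads have completed.
waitforio() {
  local probe=$1 i ops
  for ((i = 10; i != 0; i--)); do
    ops=$($probe)
    if [ "$ops" -ge 100 ]; then
      return 0    # enough I/O observed; mirror the ret=0/break path
    fi
    sleep 0.25
  done
  return 1        # timed out without reaching the threshold
}
```

With a probe that reports 385 ops, the function returns on the first iteration, matching the `read_io_count=385` / `break` sequence in the log.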
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@553 -- # xtrace_disable
00:11:06.185 22:34:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:06.185 [2024-07-15 22:34:49.621384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:06.185 [2024-07-15 22:34:49.621461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:06.185 [2024-07-15 22:34:49.621492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:06.185 [2024-07-15 22:34:49.621508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION" pairs repeated for cid:1 through cid:60 (lba 49280 through 56832, step 128) trimmed ...]
00:11:06.186 22:34:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:11:06.186 [2024-07-15 22:34:49.623363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:56960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:06.186 [2024-07-15 22:34:49.623377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:06.186 [2024-07-15 22:34:49.623397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:57088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:06.186 [2024-07-15 22:34:49.623412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:06.186 [2024-07-15 22:34:49.623426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3900 is same with the state(5) to be set
00:11:06.186 [2024-07-15 22:34:49.623498] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1aa3900 was disconnected and freed. reset controller.
00:11:06.186 22:34:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:11:06.186 22:34:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@553 -- # xtrace_disable
00:11:06.186 22:34:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:06.186 [2024-07-15 22:34:49.624667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:11:06.187 task offset: 57216 on job bdev=Nvme0n1 fails
00:11:06.187
00:11:06.187 Latency(us)
00:11:06.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:06.187 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:11:06.187 Job: Nvme0n1 ended in about 0.39 seconds with error
00:11:06.187 Verification LBA range: start 0x0 length 0x400
00:11:06.187 Nvme0n1 : 0.39 997.09 62.32 166.18 0.00 53517.06 6407.96 48739.37
00:11:06.187 ===================================================================================================================
00:11:06.187 Total : 997.09 62.32 166.18 0.00 53517.06 6407.96 48739.37
00:11:06.187 [2024-07-15 22:34:49.626631] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:11:06.187 [2024-07-15 22:34:49.626661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1692790 (9): Bad file descriptor
00:11:06.186 22:34:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:11:06.187 22:34:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:11:06.445 [2024-07-15 22:34:49.769084] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:11:07.378 22:34:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1213238
00:11:07.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1213238) - No such process
00:11:07.378 22:34:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true
00:11:07.378 22:34:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:11:07.378 22:34:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:11:07.378 22:34:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:11:07.378 22:34:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
00:11:07.378 22:34:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
00:11:07.378 22:34:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:11:07.378 22:34:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:11:07.378 {
00:11:07.378 "params": {
00:11:07.378 "name": "Nvme$subsystem",
00:11:07.378 "trtype": "$TEST_TRANSPORT",
00:11:07.378 "traddr": "$NVMF_FIRST_TARGET_IP",
00:11:07.378 "adrfam": "ipv4",
00:11:07.378 "trsvcid": "$NVMF_PORT",
00:11:07.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:11:07.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:11:07.378 "hdgst": ${hdgst:-false},
00:11:07.378 "ddgst": ${ddgst:-false}
00:11:07.378 },
00:11:07.378 "method": "bdev_nvme_attach_controller"
00:11:07.378 }
00:11:07.378 EOF
00:11:07.378 )")
00:11:07.378 22:34:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:11:07.378 22:34:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
00:11:07.378 22:34:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
00:11:07.378 22:34:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:11:07.378 "params": {
00:11:07.378 "name": "Nvme0",
00:11:07.378 "trtype": "tcp",
00:11:07.378 "traddr": "10.0.0.2",
00:11:07.378 "adrfam": "ipv4",
00:11:07.378 "trsvcid": "4420",
00:11:07.378 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:11:07.378 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:11:07.378 "hdgst": false,
00:11:07.378 "ddgst": false
00:11:07.378 },
00:11:07.378 "method": "bdev_nvme_attach_controller"
00:11:07.378 }'
00:11:07.378 [2024-07-15 22:34:50.678428] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization...
00:11:07.378 [2024-07-15 22:34:50.678504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1213514 ]
00:11:07.378 [2024-07-15 22:34:50.738578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:07.378 [2024-07-15 22:34:50.849798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:11:07.946 Running I/O for 1 seconds...
00:11:08.883 00:11:08.883 Latency(us) 00:11:08.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:08.883 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:08.883 Verification LBA range: start 0x0 length 0x400 00:11:08.883 Nvme0n1 : 1.02 1318.33 82.40 0.00 0.00 47834.65 10048.85 42137.22 00:11:08.883 =================================================================================================================== 00:11:08.883 Total : 1318.33 82.40 0.00 0.00 47834.65 10048.85 42137.22 00:11:09.142 22:34:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:09.142 22:34:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:09.142 22:34:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:09.142 22:34:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:09.142 22:34:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:09.142 22:34:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:09.142 22:34:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:11:09.142 22:34:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:09.142 22:34:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:11:09.142 22:34:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:09.142 22:34:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:09.142 rmmod nvme_tcp 00:11:09.142 rmmod nvme_fabrics 00:11:09.142 rmmod nvme_keyring 00:11:09.142 22:34:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:09.142 
22:34:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:11:09.142 22:34:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:11:09.142 22:34:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1213064 ']' 00:11:09.142 22:34:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1213064 00:11:09.143 22:34:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@942 -- # '[' -z 1213064 ']' 00:11:09.143 22:34:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # kill -0 1213064 00:11:09.143 22:34:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@947 -- # uname 00:11:09.143 22:34:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:11:09.143 22:34:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1213064 00:11:09.143 22:34:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:11:09.143 22:34:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:11:09.143 22:34:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1213064' 00:11:09.143 killing process with pid 1213064 00:11:09.143 22:34:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@961 -- # kill 1213064 00:11:09.143 22:34:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # wait 1213064 00:11:09.401 [2024-07-15 22:34:52.842039] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:09.401 22:34:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:09.401 22:34:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:09.401 22:34:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:09.401 22:34:52 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:09.401 22:34:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:09.401 22:34:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.401 22:34:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:09.401 22:34:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.015 22:34:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:12.015 22:34:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:12.015 00:11:12.015 real 0m9.452s 00:11:12.015 user 0m23.392s 00:11:12.015 sys 0m2.614s 00:11:12.015 22:34:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1118 -- # xtrace_disable 00:11:12.015 22:34:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:12.015 ************************************ 00:11:12.015 END TEST nvmf_host_management 00:11:12.015 ************************************ 00:11:12.015 22:34:54 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:11:12.015 22:34:54 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:12.015 22:34:54 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:11:12.015 22:34:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:11:12.015 22:34:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:12.015 ************************************ 00:11:12.015 START TEST nvmf_lvol 00:11:12.015 ************************************ 00:11:12.015 22:34:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:12.015 * 
Looking for test storage... 00:11:12.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:11:12.015 22:34:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:13.387 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:13.387 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:13.387 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.387 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:13.388 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:13.388 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:13.388 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:11:13.388 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.388 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:13.388 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:13.388 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.388 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:13.388 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:11:13.388 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:13.388 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:13.388 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:13.388 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:13.388 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:13.388 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:13.388 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:13.388 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:13.388 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:13.388 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:13.388 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:13.388 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:13.388 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:13.388 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:13.388 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 
00:11:13.388 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:13.388 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:13.388 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:13.388 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:13.388 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:13.646 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:13.646 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:13.646 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:13.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:13.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:11:13.647 00:11:13.647 --- 10.0.0.2 ping statistics --- 00:11:13.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.647 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:11:13.647 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:13.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:13.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:11:13.647 00:11:13.647 --- 10.0.0.1 ping statistics --- 00:11:13.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.647 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:11:13.647 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:13.647 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:11:13.647 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:13.647 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:13.647 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:13.647 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:13.647 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:13.647 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:13.647 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:13.647 22:34:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:13.647 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:13.647 22:34:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:13.647 22:34:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:13.647 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1215687 00:11:13.647 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:13.647 22:34:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1215687 00:11:13.647 22:34:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@823 -- # '[' -z 1215687 ']' 00:11:13.647 22:34:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 
-- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.647 22:34:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@828 -- # local max_retries=100 00:11:13.647 22:34:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.647 22:34:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # xtrace_disable 00:11:13.647 22:34:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:13.647 [2024-07-15 22:34:57.009625] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:11:13.647 [2024-07-15 22:34:57.009715] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.647 [2024-07-15 22:34:57.085040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:13.905 [2024-07-15 22:34:57.205545] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:13.905 [2024-07-15 22:34:57.205614] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:13.905 [2024-07-15 22:34:57.205639] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:13.905 [2024-07-15 22:34:57.205654] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:13.905 [2024-07-15 22:34:57.205665] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:13.905 [2024-07-15 22:34:57.205752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.905 [2024-07-15 22:34:57.205823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:13.905 [2024-07-15 22:34:57.205826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.905 22:34:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:11:13.905 22:34:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # return 0 00:11:13.905 22:34:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:13.905 22:34:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:13.905 22:34:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:13.905 22:34:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:13.905 22:34:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:14.163 [2024-07-15 22:34:57.589697] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:14.163 22:34:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:14.420 22:34:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:14.420 22:34:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:14.989 22:34:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:14.989 22:34:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:15.249 22:34:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:15.506 22:34:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=59b37245-ed4a-499f-8b30-ecce8c45e364 00:11:15.506 22:34:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 59b37245-ed4a-499f-8b30-ecce8c45e364 lvol 20 00:11:15.765 22:34:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=bec99c70-5f23-43a1-9eca-b4b160c8dca5 00:11:15.765 22:34:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:16.022 22:34:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bec99c70-5f23-43a1-9eca-b4b160c8dca5 00:11:16.280 22:34:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:16.538 [2024-07-15 22:34:59.826046] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.538 22:34:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:16.795 22:35:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1216017 00:11:16.795 22:35:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:16.795 22:35:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:17.728 22:35:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot bec99c70-5f23-43a1-9eca-b4b160c8dca5 MY_SNAPSHOT 00:11:17.986 22:35:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=203bbf92-6e67-4e8f-913f-0d217ceada39 00:11:17.986 22:35:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize bec99c70-5f23-43a1-9eca-b4b160c8dca5 30 00:11:18.553 22:35:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 203bbf92-6e67-4e8f-913f-0d217ceada39 MY_CLONE 00:11:18.553 22:35:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=67cce69e-69f4-4bb5-97a2-0cb06ae5ed0a 00:11:18.554 22:35:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 67cce69e-69f4-4bb5-97a2-0cb06ae5ed0a 00:11:19.122 22:35:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1216017 00:11:27.241 Initializing NVMe Controllers 00:11:27.241 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:27.241 Controller IO queue size 128, less than required. 00:11:27.241 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:27.241 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:27.241 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:27.241 Initialization complete. Launching workers. 
00:11:27.241 ======================================================== 00:11:27.241 Latency(us) 00:11:27.241 Device Information : IOPS MiB/s Average min max 00:11:27.241 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10367.70 40.50 12352.54 1981.19 63127.51 00:11:27.241 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10399.50 40.62 12312.76 2093.33 65917.46 00:11:27.241 ======================================================== 00:11:27.241 Total : 20767.20 81.12 12332.62 1981.19 65917.46 00:11:27.241 00:11:27.241 22:35:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:27.241 22:35:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bec99c70-5f23-43a1-9eca-b4b160c8dca5 00:11:27.497 22:35:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 59b37245-ed4a-499f-8b30-ecce8c45e364 00:11:27.754 22:35:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:27.755 22:35:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:27.755 22:35:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:27.755 22:35:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:27.755 22:35:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:11:27.755 22:35:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:27.755 22:35:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:11:27.755 22:35:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:27.755 22:35:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:27.755 rmmod nvme_tcp 00:11:27.755 rmmod nvme_fabrics 00:11:27.755 rmmod nvme_keyring 00:11:27.755 
22:35:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:27.755 22:35:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:11:27.755 22:35:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:11:27.755 22:35:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1215687 ']' 00:11:27.755 22:35:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1215687 00:11:27.755 22:35:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@942 -- # '[' -z 1215687 ']' 00:11:27.755 22:35:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # kill -0 1215687 00:11:27.755 22:35:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@947 -- # uname 00:11:27.755 22:35:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:11:27.755 22:35:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1215687 00:11:28.014 22:35:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:11:28.014 22:35:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:11:28.014 22:35:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1215687' 00:11:28.014 killing process with pid 1215687 00:11:28.014 22:35:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@961 -- # kill 1215687 00:11:28.014 22:35:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # wait 1215687 00:11:28.274 22:35:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:28.274 22:35:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:28.274 22:35:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:28.274 22:35:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:28.274 22:35:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:28.274 22:35:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.274 22:35:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:28.274 22:35:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.180 22:35:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:30.180 00:11:30.180 real 0m18.671s 00:11:30.180 user 1m4.885s 00:11:30.180 sys 0m5.176s 00:11:30.180 22:35:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1118 -- # xtrace_disable 00:11:30.180 22:35:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:30.180 ************************************ 00:11:30.180 END TEST nvmf_lvol 00:11:30.180 ************************************ 00:11:30.180 22:35:13 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:11:30.180 22:35:13 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:30.180 22:35:13 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:11:30.180 22:35:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:11:30.180 22:35:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:30.438 ************************************ 00:11:30.438 START TEST nvmf_lvs_grow 00:11:30.438 ************************************ 00:11:30.438 22:35:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:30.438 * Looking for test storage... 
00:11:30.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:30.438 22:35:13 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:30.438 22:35:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:30.438 22:35:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:30.438 22:35:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:30.438 22:35:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:30.438 22:35:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:30.438 22:35:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:30.438 22:35:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:30.438 22:35:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:30.438 22:35:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:30.438 22:35:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:30.438 22:35:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:30.438 22:35:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:30.438 22:35:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:30.438 22:35:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:30.438 22:35:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:30.438 22:35:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:30.438 22:35:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:30.438 22:35:13 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:30.438 22:35:13 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.438 22:35:13 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.438 22:35:13 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.439 22:35:13 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.439 22:35:13 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.439 22:35:13 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.439 22:35:13 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:30.439 22:35:13 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.439 22:35:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:11:30.439 22:35:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:30.439 22:35:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:30.439 22:35:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:30.439 22:35:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:30.439 22:35:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:30.439 22:35:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:30.439 22:35:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:30.439 22:35:13 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:30.439 22:35:13 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:30.439 22:35:13 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:30.439 22:35:13 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:30.439 22:35:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:30.439 22:35:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:30.439 22:35:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:30.439 22:35:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:30.439 22:35:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:30.439 22:35:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.439 22:35:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:30.439 22:35:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.439 22:35:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:30.439 22:35:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:30.439 22:35:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:11:30.439 22:35:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:32.409 22:35:15 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:32.409 22:35:15 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:32.409 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:32.409 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:32.409 22:35:15 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.409 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:32.410 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:32.410 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:32.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:32.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:11:32.410 00:11:32.410 --- 10.0.0.2 ping statistics --- 00:11:32.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.410 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:32.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:32.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:11:32.410 00:11:32.410 --- 10.0.0.1 ping statistics --- 00:11:32.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.410 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1219293 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1219293 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # '[' -z 1219293 ']' 
00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@828 -- # local max_retries=100 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # xtrace_disable 00:11:32.410 22:35:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:32.669 [2024-07-15 22:35:15.943973] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:11:32.669 [2024-07-15 22:35:15.944055] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.669 [2024-07-15 22:35:16.014365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.669 [2024-07-15 22:35:16.130950] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:32.669 [2024-07-15 22:35:16.131001] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:32.669 [2024-07-15 22:35:16.131017] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:32.669 [2024-07-15 22:35:16.131030] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:32.669 [2024-07-15 22:35:16.131042] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:32.669 [2024-07-15 22:35:16.131072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.604 22:35:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:11:33.604 22:35:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # return 0 00:11:33.604 22:35:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:33.604 22:35:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:33.604 22:35:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:33.604 22:35:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:33.604 22:35:16 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:33.863 [2024-07-15 22:35:17.190019] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:33.863 22:35:17 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:33.863 22:35:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:11:33.863 22:35:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # xtrace_disable 00:11:33.863 22:35:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:33.863 ************************************ 00:11:33.863 START TEST lvs_grow_clean 00:11:33.863 ************************************ 00:11:33.863 22:35:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1117 -- # lvs_grow 00:11:33.863 22:35:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:33.863 22:35:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:33.863 22:35:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:33.863 22:35:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:33.863 22:35:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:33.863 22:35:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:33.863 22:35:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:33.863 22:35:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:33.863 22:35:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:34.121 22:35:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:34.121 22:35:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:34.380 22:35:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=1cec4555-8932-4281-8f56-630dd6b38c0d 00:11:34.380 22:35:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cec4555-8932-4281-8f56-630dd6b38c0d 00:11:34.380 22:35:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:34.639 22:35:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:34.639 22:35:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:34.639 22:35:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1cec4555-8932-4281-8f56-630dd6b38c0d lvol 150 00:11:34.897 22:35:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=5bdefdfd-7493-4beb-bb4f-dc22bce6351a 00:11:34.897 22:35:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:34.898 22:35:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:35.157 [2024-07-15 22:35:18.550195] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:35.157 [2024-07-15 22:35:18.550318] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:35.157 true 00:11:35.157 22:35:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cec4555-8932-4281-8f56-630dd6b38c0d 00:11:35.157 22:35:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:35.415 22:35:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:35.415 22:35:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:11:35.673 22:35:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5bdefdfd-7493-4beb-bb4f-dc22bce6351a 00:11:35.931 22:35:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:36.191 [2024-07-15 22:35:19.621487] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:36.191 22:35:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:36.450 22:35:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1219857 00:11:36.450 22:35:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:36.450 22:35:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:36.450 22:35:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1219857 /var/tmp/bdevperf.sock 00:11:36.450 22:35:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@823 -- # '[' -z 1219857 ']' 00:11:36.450 22:35:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:36.450 22:35:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@828 -- # local max_retries=100 00:11:36.450 22:35:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@830 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:36.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:36.450 22:35:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # xtrace_disable 00:11:36.450 22:35:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:36.450 [2024-07-15 22:35:19.926953] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:11:36.450 [2024-07-15 22:35:19.927041] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1219857 ] 00:11:36.710 [2024-07-15 22:35:19.989666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.710 [2024-07-15 22:35:20.109891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.969 22:35:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:11:36.969 22:35:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # return 0 00:11:36.969 22:35:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:37.227 Nvme0n1 00:11:37.227 22:35:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:37.487 [ 00:11:37.487 { 00:11:37.487 "name": "Nvme0n1", 00:11:37.487 "aliases": [ 00:11:37.487 "5bdefdfd-7493-4beb-bb4f-dc22bce6351a" 00:11:37.487 ], 00:11:37.487 "product_name": "NVMe disk", 00:11:37.487 
"block_size": 4096, 00:11:37.487 "num_blocks": 38912, 00:11:37.487 "uuid": "5bdefdfd-7493-4beb-bb4f-dc22bce6351a", 00:11:37.487 "assigned_rate_limits": { 00:11:37.487 "rw_ios_per_sec": 0, 00:11:37.487 "rw_mbytes_per_sec": 0, 00:11:37.487 "r_mbytes_per_sec": 0, 00:11:37.487 "w_mbytes_per_sec": 0 00:11:37.487 }, 00:11:37.487 "claimed": false, 00:11:37.487 "zoned": false, 00:11:37.487 "supported_io_types": { 00:11:37.487 "read": true, 00:11:37.487 "write": true, 00:11:37.487 "unmap": true, 00:11:37.487 "flush": true, 00:11:37.487 "reset": true, 00:11:37.487 "nvme_admin": true, 00:11:37.487 "nvme_io": true, 00:11:37.487 "nvme_io_md": false, 00:11:37.487 "write_zeroes": true, 00:11:37.487 "zcopy": false, 00:11:37.487 "get_zone_info": false, 00:11:37.487 "zone_management": false, 00:11:37.487 "zone_append": false, 00:11:37.487 "compare": true, 00:11:37.487 "compare_and_write": true, 00:11:37.487 "abort": true, 00:11:37.487 "seek_hole": false, 00:11:37.487 "seek_data": false, 00:11:37.487 "copy": true, 00:11:37.487 "nvme_iov_md": false 00:11:37.487 }, 00:11:37.487 "memory_domains": [ 00:11:37.487 { 00:11:37.487 "dma_device_id": "system", 00:11:37.487 "dma_device_type": 1 00:11:37.487 } 00:11:37.487 ], 00:11:37.487 "driver_specific": { 00:11:37.487 "nvme": [ 00:11:37.487 { 00:11:37.487 "trid": { 00:11:37.487 "trtype": "TCP", 00:11:37.487 "adrfam": "IPv4", 00:11:37.487 "traddr": "10.0.0.2", 00:11:37.487 "trsvcid": "4420", 00:11:37.487 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:37.487 }, 00:11:37.487 "ctrlr_data": { 00:11:37.487 "cntlid": 1, 00:11:37.487 "vendor_id": "0x8086", 00:11:37.487 "model_number": "SPDK bdev Controller", 00:11:37.487 "serial_number": "SPDK0", 00:11:37.487 "firmware_revision": "24.09", 00:11:37.487 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:37.487 "oacs": { 00:11:37.487 "security": 0, 00:11:37.487 "format": 0, 00:11:37.487 "firmware": 0, 00:11:37.487 "ns_manage": 0 00:11:37.487 }, 00:11:37.487 "multi_ctrlr": true, 00:11:37.487 "ana_reporting": 
false 00:11:37.487 }, 00:11:37.487 "vs": { 00:11:37.487 "nvme_version": "1.3" 00:11:37.487 }, 00:11:37.487 "ns_data": { 00:11:37.487 "id": 1, 00:11:37.487 "can_share": true 00:11:37.487 } 00:11:37.487 } 00:11:37.487 ], 00:11:37.487 "mp_policy": "active_passive" 00:11:37.487 } 00:11:37.487 } 00:11:37.487 ] 00:11:37.487 22:35:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1219995 00:11:37.487 22:35:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:37.487 22:35:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:37.487 Running I/O for 10 seconds... 00:11:38.871 Latency(us) 00:11:38.871 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:38.871 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:38.871 Nvme0n1 : 1.00 14296.00 55.84 0.00 0.00 0.00 0.00 0.00 00:11:38.871 =================================================================================================================== 00:11:38.871 Total : 14296.00 55.84 0.00 0.00 0.00 0.00 0.00 00:11:38.871 00:11:39.438 22:35:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1cec4555-8932-4281-8f56-630dd6b38c0d 00:11:39.695 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:39.695 Nvme0n1 : 2.00 14404.00 56.27 0.00 0.00 0.00 0.00 0.00 00:11:39.695 =================================================================================================================== 00:11:39.695 Total : 14404.00 56.27 0.00 0.00 0.00 0.00 0.00 00:11:39.695 00:11:39.695 true 00:11:39.695 22:35:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cec4555-8932-4281-8f56-630dd6b38c0d 00:11:39.695 22:35:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:39.953 22:35:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:39.953 22:35:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:39.953 22:35:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1219995 00:11:40.521 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:40.521 Nvme0n1 : 3.00 14552.00 56.84 0.00 0.00 0.00 0.00 0.00 00:11:40.521 =================================================================================================================== 00:11:40.521 Total : 14552.00 56.84 0.00 0.00 0.00 0.00 0.00 00:11:40.521 00:11:41.461 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:41.461 Nvme0n1 : 4.00 14626.00 57.13 0.00 0.00 0.00 0.00 0.00 00:11:41.461 =================================================================================================================== 00:11:41.461 Total : 14626.00 57.13 0.00 0.00 0.00 0.00 0.00 00:11:41.461 00:11:42.842 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:42.842 Nvme0n1 : 5.00 14670.20 57.31 0.00 0.00 0.00 0.00 0.00 00:11:42.842 =================================================================================================================== 00:11:42.842 Total : 14670.20 57.31 0.00 0.00 0.00 0.00 0.00 00:11:42.842 00:11:43.781 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:43.781 Nvme0n1 : 6.00 14763.83 57.67 0.00 0.00 0.00 0.00 0.00 00:11:43.781 =================================================================================================================== 00:11:43.781 Total : 14763.83 
57.67 0.00 0.00 0.00 0.00 0.00 00:11:43.781 00:11:44.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:44.719 Nvme0n1 : 7.00 14776.00 57.72 0.00 0.00 0.00 0.00 0.00 00:11:44.719 =================================================================================================================== 00:11:44.719 Total : 14776.00 57.72 0.00 0.00 0.00 0.00 0.00 00:11:44.719 00:11:45.658 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:45.658 Nvme0n1 : 8.00 14817.00 57.88 0.00 0.00 0.00 0.00 0.00 00:11:45.658 =================================================================================================================== 00:11:45.658 Total : 14817.00 57.88 0.00 0.00 0.00 0.00 0.00 00:11:45.658 00:11:46.610 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:46.610 Nvme0n1 : 9.00 14870.11 58.09 0.00 0.00 0.00 0.00 0.00 00:11:46.610 =================================================================================================================== 00:11:46.610 Total : 14870.11 58.09 0.00 0.00 0.00 0.00 0.00 00:11:46.610 00:11:47.547 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:47.547 Nvme0n1 : 10.00 14867.90 58.08 0.00 0.00 0.00 0.00 0.00 00:11:47.547 =================================================================================================================== 00:11:47.547 Total : 14867.90 58.08 0.00 0.00 0.00 0.00 0.00 00:11:47.547 00:11:47.547 00:11:47.547 Latency(us) 00:11:47.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:47.547 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:47.547 Nvme0n1 : 10.00 14867.26 58.08 0.00 0.00 8603.40 5242.88 17670.45 00:11:47.547 =================================================================================================================== 00:11:47.547 Total : 14867.26 58.08 0.00 0.00 8603.40 5242.88 17670.45 00:11:47.547 0 
00:11:47.547 22:35:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1219857 00:11:47.547 22:35:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@942 -- # '[' -z 1219857 ']' 00:11:47.547 22:35:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # kill -0 1219857 00:11:47.547 22:35:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@947 -- # uname 00:11:47.547 22:35:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:11:47.547 22:35:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1219857 00:11:47.547 22:35:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:11:47.547 22:35:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:11:47.547 22:35:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1219857' 00:11:47.547 killing process with pid 1219857 00:11:47.547 22:35:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@961 -- # kill 1219857 00:11:47.547 Received shutdown signal, test time was about 10.000000 seconds 00:11:47.547 00:11:47.547 Latency(us) 00:11:47.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:47.547 =================================================================================================================== 00:11:47.547 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:47.547 22:35:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # wait 1219857 00:11:47.804 22:35:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:48.086 22:35:31 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:48.343 22:35:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cec4555-8932-4281-8f56-630dd6b38c0d 00:11:48.343 22:35:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:48.600 22:35:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:48.600 22:35:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:11:48.600 22:35:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:48.858 [2024-07-15 22:35:32.229712] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:48.858 22:35:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cec4555-8932-4281-8f56-630dd6b38c0d 00:11:48.858 22:35:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # local es=0 00:11:48.858 22:35:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cec4555-8932-4281-8f56-630dd6b38c0d 00:11:48.858 22:35:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@630 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:48.858 22:35:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:11:48.858 22:35:32 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:48.858 22:35:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:11:48.858 22:35:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:48.858 22:35:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:11:48.858 22:35:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:48.858 22:35:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:48.858 22:35:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@645 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cec4555-8932-4281-8f56-630dd6b38c0d 00:11:49.116 request: 00:11:49.116 { 00:11:49.116 "uuid": "1cec4555-8932-4281-8f56-630dd6b38c0d", 00:11:49.116 "method": "bdev_lvol_get_lvstores", 00:11:49.116 "req_id": 1 00:11:49.116 } 00:11:49.116 Got JSON-RPC error response 00:11:49.116 response: 00:11:49.116 { 00:11:49.116 "code": -19, 00:11:49.116 "message": "No such device" 00:11:49.116 } 00:11:49.116 22:35:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@645 -- # es=1 00:11:49.116 22:35:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:11:49.116 22:35:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:11:49.116 22:35:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:11:49.116 22:35:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:49.375 aio_bdev 00:11:49.375 22:35:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5bdefdfd-7493-4beb-bb4f-dc22bce6351a 00:11:49.375 22:35:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@891 -- # local bdev_name=5bdefdfd-7493-4beb-bb4f-dc22bce6351a 00:11:49.375 22:35:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:11:49.375 22:35:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@893 -- # local i 00:11:49.375 22:35:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:11:49.375 22:35:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:11:49.375 22:35:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:49.633 22:35:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5bdefdfd-7493-4beb-bb4f-dc22bce6351a -t 2000 00:11:49.891 [ 00:11:49.891 { 00:11:49.891 "name": "5bdefdfd-7493-4beb-bb4f-dc22bce6351a", 00:11:49.891 "aliases": [ 00:11:49.891 "lvs/lvol" 00:11:49.891 ], 00:11:49.891 "product_name": "Logical Volume", 00:11:49.891 "block_size": 4096, 00:11:49.891 "num_blocks": 38912, 00:11:49.891 "uuid": "5bdefdfd-7493-4beb-bb4f-dc22bce6351a", 00:11:49.891 "assigned_rate_limits": { 00:11:49.891 "rw_ios_per_sec": 0, 00:11:49.891 "rw_mbytes_per_sec": 0, 00:11:49.891 "r_mbytes_per_sec": 0, 00:11:49.891 "w_mbytes_per_sec": 0 00:11:49.891 }, 00:11:49.891 "claimed": false, 00:11:49.891 "zoned": false, 00:11:49.891 
"supported_io_types": { 00:11:49.891 "read": true, 00:11:49.891 "write": true, 00:11:49.891 "unmap": true, 00:11:49.891 "flush": false, 00:11:49.891 "reset": true, 00:11:49.891 "nvme_admin": false, 00:11:49.891 "nvme_io": false, 00:11:49.891 "nvme_io_md": false, 00:11:49.891 "write_zeroes": true, 00:11:49.891 "zcopy": false, 00:11:49.891 "get_zone_info": false, 00:11:49.891 "zone_management": false, 00:11:49.891 "zone_append": false, 00:11:49.891 "compare": false, 00:11:49.891 "compare_and_write": false, 00:11:49.891 "abort": false, 00:11:49.891 "seek_hole": true, 00:11:49.891 "seek_data": true, 00:11:49.891 "copy": false, 00:11:49.891 "nvme_iov_md": false 00:11:49.891 }, 00:11:49.891 "driver_specific": { 00:11:49.891 "lvol": { 00:11:49.891 "lvol_store_uuid": "1cec4555-8932-4281-8f56-630dd6b38c0d", 00:11:49.891 "base_bdev": "aio_bdev", 00:11:49.891 "thin_provision": false, 00:11:49.891 "num_allocated_clusters": 38, 00:11:49.891 "snapshot": false, 00:11:49.891 "clone": false, 00:11:49.891 "esnap_clone": false 00:11:49.891 } 00:11:49.891 } 00:11:49.891 } 00:11:49.891 ] 00:11:49.891 22:35:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # return 0 00:11:49.891 22:35:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:49.891 22:35:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cec4555-8932-4281-8f56-630dd6b38c0d 00:11:50.150 22:35:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:50.150 22:35:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cec4555-8932-4281-8f56-630dd6b38c0d 00:11:50.150 22:35:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r 
'.[0].total_data_clusters' 00:11:50.408 22:35:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:50.408 22:35:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5bdefdfd-7493-4beb-bb4f-dc22bce6351a 00:11:50.665 22:35:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1cec4555-8932-4281-8f56-630dd6b38c0d 00:11:51.233 22:35:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:51.233 22:35:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:51.233 00:11:51.233 real 0m17.483s 00:11:51.233 user 0m16.936s 00:11:51.233 sys 0m1.943s 00:11:51.233 22:35:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1118 -- # xtrace_disable 00:11:51.233 22:35:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:51.233 ************************************ 00:11:51.233 END TEST lvs_grow_clean 00:11:51.233 ************************************ 00:11:51.492 22:35:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1136 -- # return 0 00:11:51.492 22:35:34 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:51.492 22:35:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:11:51.492 22:35:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # xtrace_disable 00:11:51.492 22:35:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:51.492 ************************************ 00:11:51.492 START TEST lvs_grow_dirty 
00:11:51.492 ************************************ 00:11:51.492 22:35:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1117 -- # lvs_grow dirty 00:11:51.492 22:35:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:51.492 22:35:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:51.492 22:35:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:51.492 22:35:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:51.492 22:35:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:51.492 22:35:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:51.492 22:35:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:51.492 22:35:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:51.492 22:35:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:51.749 22:35:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:51.749 22:35:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:52.008 22:35:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
lvs=a3bb7f39-1a28-4d7a-a2f1-81148a49a851 00:11:52.008 22:35:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a3bb7f39-1a28-4d7a-a2f1-81148a49a851 00:11:52.008 22:35:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:52.268 22:35:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:52.268 22:35:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:52.268 22:35:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a3bb7f39-1a28-4d7a-a2f1-81148a49a851 lvol 150 00:11:52.528 22:35:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1e058adf-5751-4c19-bc26-4b9bfb80cbb4 00:11:52.528 22:35:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:52.528 22:35:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:52.795 [2024-07-15 22:35:36.100138] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:52.795 [2024-07-15 22:35:36.100234] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:52.795 true 00:11:52.795 22:35:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a3bb7f39-1a28-4d7a-a2f1-81148a49a851 00:11:52.795 22:35:36 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:53.054 22:35:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:53.054 22:35:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:53.312 22:35:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1e058adf-5751-4c19-bc26-4b9bfb80cbb4 00:11:53.570 22:35:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:53.827 [2024-07-15 22:35:37.083152] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.827 22:35:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:54.085 22:35:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1221910 00:11:54.085 22:35:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:54.085 22:35:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:54.085 22:35:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1221910 /var/tmp/bdevperf.sock 00:11:54.085 22:35:37 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@823 -- # '[' -z 1221910 ']' 00:11:54.085 22:35:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:54.085 22:35:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@828 -- # local max_retries=100 00:11:54.085 22:35:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:54.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:54.085 22:35:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # xtrace_disable 00:11:54.085 22:35:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:54.086 [2024-07-15 22:35:37.437937] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:11:54.086 [2024-07-15 22:35:37.438022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1221910 ] 00:11:54.086 [2024-07-15 22:35:37.499205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.344 [2024-07-15 22:35:37.616436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.344 22:35:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:11:54.344 22:35:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # return 0 00:11:54.344 22:35:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode0 00:11:54.602 Nvme0n1 00:11:54.602 22:35:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:54.860 [ 00:11:54.860 { 00:11:54.860 "name": "Nvme0n1", 00:11:54.860 "aliases": [ 00:11:54.860 "1e058adf-5751-4c19-bc26-4b9bfb80cbb4" 00:11:54.860 ], 00:11:54.860 "product_name": "NVMe disk", 00:11:54.860 "block_size": 4096, 00:11:54.860 "num_blocks": 38912, 00:11:54.860 "uuid": "1e058adf-5751-4c19-bc26-4b9bfb80cbb4", 00:11:54.860 "assigned_rate_limits": { 00:11:54.860 "rw_ios_per_sec": 0, 00:11:54.860 "rw_mbytes_per_sec": 0, 00:11:54.860 "r_mbytes_per_sec": 0, 00:11:54.860 "w_mbytes_per_sec": 0 00:11:54.860 }, 00:11:54.860 "claimed": false, 00:11:54.860 "zoned": false, 00:11:54.860 "supported_io_types": { 00:11:54.860 "read": true, 00:11:54.860 "write": true, 00:11:54.860 "unmap": true, 00:11:54.860 "flush": true, 00:11:54.860 "reset": true, 00:11:54.860 "nvme_admin": true, 00:11:54.860 "nvme_io": true, 00:11:54.860 "nvme_io_md": false, 00:11:54.860 "write_zeroes": true, 00:11:54.860 "zcopy": false, 00:11:54.860 "get_zone_info": false, 00:11:54.860 "zone_management": false, 00:11:54.860 "zone_append": false, 00:11:54.860 "compare": true, 00:11:54.860 "compare_and_write": true, 00:11:54.860 "abort": true, 00:11:54.860 "seek_hole": false, 00:11:54.860 "seek_data": false, 00:11:54.860 "copy": true, 00:11:54.860 "nvme_iov_md": false 00:11:54.860 }, 00:11:54.860 "memory_domains": [ 00:11:54.860 { 00:11:54.860 "dma_device_id": "system", 00:11:54.860 "dma_device_type": 1 00:11:54.860 } 00:11:54.860 ], 00:11:54.860 "driver_specific": { 00:11:54.860 "nvme": [ 00:11:54.860 { 00:11:54.860 "trid": { 00:11:54.860 "trtype": "TCP", 00:11:54.860 "adrfam": "IPv4", 00:11:54.860 "traddr": "10.0.0.2", 00:11:54.860 "trsvcid": "4420", 00:11:54.860 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:54.860 }, 00:11:54.860 
"ctrlr_data": { 00:11:54.860 "cntlid": 1, 00:11:54.860 "vendor_id": "0x8086", 00:11:54.860 "model_number": "SPDK bdev Controller", 00:11:54.860 "serial_number": "SPDK0", 00:11:54.860 "firmware_revision": "24.09", 00:11:54.860 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:54.860 "oacs": { 00:11:54.860 "security": 0, 00:11:54.860 "format": 0, 00:11:54.860 "firmware": 0, 00:11:54.860 "ns_manage": 0 00:11:54.860 }, 00:11:54.860 "multi_ctrlr": true, 00:11:54.860 "ana_reporting": false 00:11:54.860 }, 00:11:54.860 "vs": { 00:11:54.860 "nvme_version": "1.3" 00:11:54.860 }, 00:11:54.860 "ns_data": { 00:11:54.860 "id": 1, 00:11:54.860 "can_share": true 00:11:54.860 } 00:11:54.860 } 00:11:54.860 ], 00:11:54.860 "mp_policy": "active_passive" 00:11:54.860 } 00:11:54.860 } 00:11:54.860 ] 00:11:54.860 22:35:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1222047 00:11:54.860 22:35:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:54.860 22:35:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:55.118 Running I/O for 10 seconds... 
00:11:56.056 Latency(us) 00:11:56.056 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:56.056 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:56.056 Nvme0n1 : 1.00 13293.00 51.93 0.00 0.00 0.00 0.00 0.00 00:11:56.056 =================================================================================================================== 00:11:56.056 Total : 13293.00 51.93 0.00 0.00 0.00 0.00 0.00 00:11:56.056 00:11:56.992 22:35:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a3bb7f39-1a28-4d7a-a2f1-81148a49a851 00:11:56.992 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:56.992 Nvme0n1 : 2.00 13530.50 52.85 0.00 0.00 0.00 0.00 0.00 00:11:56.992 =================================================================================================================== 00:11:56.992 Total : 13530.50 52.85 0.00 0.00 0.00 0.00 0.00 00:11:56.992 00:11:57.250 true 00:11:57.250 22:35:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a3bb7f39-1a28-4d7a-a2f1-81148a49a851 00:11:57.250 22:35:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:57.509 22:35:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:57.509 22:35:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:57.509 22:35:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1222047 00:11:58.079 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:58.079 Nvme0n1 : 3.00 13631.00 53.25 0.00 0.00 0.00 0.00 0.00 00:11:58.079 
=================================================================================================================== 00:11:58.079 Total : 13631.00 53.25 0.00 0.00 0.00 0.00 0.00 00:11:58.079 00:11:59.018 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:59.018 Nvme0n1 : 4.00 13657.25 53.35 0.00 0.00 0.00 0.00 0.00 00:11:59.018 =================================================================================================================== 00:11:59.018 Total : 13657.25 53.35 0.00 0.00 0.00 0.00 0.00 00:11:59.018 00:11:59.994 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:59.994 Nvme0n1 : 5.00 13687.40 53.47 0.00 0.00 0.00 0.00 0.00 00:11:59.994 =================================================================================================================== 00:11:59.994 Total : 13687.40 53.47 0.00 0.00 0.00 0.00 0.00 00:11:59.994 00:12:01.374 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:01.374 Nvme0n1 : 6.00 13752.83 53.72 0.00 0.00 0.00 0.00 0.00 00:12:01.374 =================================================================================================================== 00:12:01.374 Total : 13752.83 53.72 0.00 0.00 0.00 0.00 0.00 00:12:01.374 00:12:02.315 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:02.315 Nvme0n1 : 7.00 13781.29 53.83 0.00 0.00 0.00 0.00 0.00 00:12:02.315 =================================================================================================================== 00:12:02.315 Total : 13781.29 53.83 0.00 0.00 0.00 0.00 0.00 00:12:02.315 00:12:03.254 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:03.254 Nvme0n1 : 8.00 13815.62 53.97 0.00 0.00 0.00 0.00 0.00 00:12:03.254 =================================================================================================================== 00:12:03.254 Total : 13815.62 53.97 0.00 0.00 0.00 0.00 0.00 00:12:03.254 
00:12:04.191 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:04.191 Nvme0n1 : 9.00 13829.89 54.02 0.00 0.00 0.00 0.00 0.00 00:12:04.191 =================================================================================================================== 00:12:04.191 Total : 13829.89 54.02 0.00 0.00 0.00 0.00 0.00 00:12:04.191 00:12:05.130 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:05.130 Nvme0n1 : 10.00 13848.50 54.10 0.00 0.00 0.00 0.00 0.00 00:12:05.130 =================================================================================================================== 00:12:05.130 Total : 13848.50 54.10 0.00 0.00 0.00 0.00 0.00 00:12:05.130 00:12:05.130 00:12:05.130 Latency(us) 00:12:05.130 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:05.130 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:05.130 Nvme0n1 : 10.01 13849.23 54.10 0.00 0.00 9234.31 7039.05 17379.18 00:12:05.130 =================================================================================================================== 00:12:05.130 Total : 13849.23 54.10 0.00 0.00 9234.31 7039.05 17379.18 00:12:05.130 0 00:12:05.130 22:35:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1221910 00:12:05.130 22:35:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@942 -- # '[' -z 1221910 ']' 00:12:05.130 22:35:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # kill -0 1221910 00:12:05.130 22:35:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@947 -- # uname 00:12:05.130 22:35:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:12:05.130 22:35:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1221910 00:12:05.130 22:35:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@948 -- # process_name=reactor_1 00:12:05.130 22:35:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:12:05.130 22:35:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1221910' 00:12:05.130 killing process with pid 1221910 00:12:05.130 22:35:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@961 -- # kill 1221910 00:12:05.130 Received shutdown signal, test time was about 10.000000 seconds 00:12:05.130 00:12:05.130 Latency(us) 00:12:05.130 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:05.131 =================================================================================================================== 00:12:05.131 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:05.131 22:35:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # wait 1221910 00:12:05.389 22:35:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:05.646 22:35:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:05.903 22:35:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a3bb7f39-1a28-4d7a-a2f1-81148a49a851 00:12:05.903 22:35:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:06.163 22:35:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:06.163 22:35:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:06.163 
22:35:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1219293 00:12:06.163 22:35:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1219293 00:12:06.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1219293 Killed "${NVMF_APP[@]}" "$@" 00:12:06.424 22:35:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:06.424 22:35:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:06.424 22:35:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:06.424 22:35:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:06.424 22:35:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:06.424 22:35:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1223374 00:12:06.424 22:35:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:06.424 22:35:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1223374 00:12:06.424 22:35:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@823 -- # '[' -z 1223374 ']' 00:12:06.424 22:35:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.424 22:35:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@828 -- # local max_retries=100 00:12:06.424 22:35:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:06.424 22:35:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # xtrace_disable 00:12:06.424 22:35:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:06.424 [2024-07-15 22:35:49.716181] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:12:06.424 [2024-07-15 22:35:49.716279] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:06.424 [2024-07-15 22:35:49.782152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.424 [2024-07-15 22:35:49.896556] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:06.424 [2024-07-15 22:35:49.896610] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:06.424 [2024-07-15 22:35:49.896638] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:06.424 [2024-07-15 22:35:49.896650] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:06.424 [2024-07-15 22:35:49.896660] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:06.424 [2024-07-15 22:35:49.896696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.684 22:35:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:12:06.684 22:35:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # return 0 00:12:06.684 22:35:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:06.684 22:35:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:06.684 22:35:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:06.684 22:35:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.684 22:35:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:06.943 [2024-07-15 22:35:50.320281] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:06.943 [2024-07-15 22:35:50.320434] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:06.943 [2024-07-15 22:35:50.320491] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:06.943 22:35:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:06.943 22:35:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1e058adf-5751-4c19-bc26-4b9bfb80cbb4 00:12:06.943 22:35:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@891 -- # local bdev_name=1e058adf-5751-4c19-bc26-4b9bfb80cbb4 00:12:06.943 22:35:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:12:06.943 22:35:50 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@893 -- # local i 00:12:06.943 22:35:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:12:06.943 22:35:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:12:06.943 22:35:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:07.203 22:35:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1e058adf-5751-4c19-bc26-4b9bfb80cbb4 -t 2000 00:12:07.463 [ 00:12:07.463 { 00:12:07.463 "name": "1e058adf-5751-4c19-bc26-4b9bfb80cbb4", 00:12:07.463 "aliases": [ 00:12:07.463 "lvs/lvol" 00:12:07.463 ], 00:12:07.463 "product_name": "Logical Volume", 00:12:07.463 "block_size": 4096, 00:12:07.463 "num_blocks": 38912, 00:12:07.463 "uuid": "1e058adf-5751-4c19-bc26-4b9bfb80cbb4", 00:12:07.463 "assigned_rate_limits": { 00:12:07.463 "rw_ios_per_sec": 0, 00:12:07.463 "rw_mbytes_per_sec": 0, 00:12:07.463 "r_mbytes_per_sec": 0, 00:12:07.463 "w_mbytes_per_sec": 0 00:12:07.463 }, 00:12:07.463 "claimed": false, 00:12:07.463 "zoned": false, 00:12:07.463 "supported_io_types": { 00:12:07.463 "read": true, 00:12:07.463 "write": true, 00:12:07.463 "unmap": true, 00:12:07.463 "flush": false, 00:12:07.463 "reset": true, 00:12:07.463 "nvme_admin": false, 00:12:07.463 "nvme_io": false, 00:12:07.463 "nvme_io_md": false, 00:12:07.463 "write_zeroes": true, 00:12:07.463 "zcopy": false, 00:12:07.463 "get_zone_info": false, 00:12:07.463 "zone_management": false, 00:12:07.463 "zone_append": false, 00:12:07.463 "compare": false, 00:12:07.463 "compare_and_write": false, 00:12:07.463 "abort": false, 00:12:07.463 "seek_hole": true, 00:12:07.463 "seek_data": true, 00:12:07.463 "copy": false, 00:12:07.463 "nvme_iov_md": false 
00:12:07.463 }, 00:12:07.463 "driver_specific": { 00:12:07.463 "lvol": { 00:12:07.464 "lvol_store_uuid": "a3bb7f39-1a28-4d7a-a2f1-81148a49a851", 00:12:07.464 "base_bdev": "aio_bdev", 00:12:07.464 "thin_provision": false, 00:12:07.464 "num_allocated_clusters": 38, 00:12:07.464 "snapshot": false, 00:12:07.464 "clone": false, 00:12:07.464 "esnap_clone": false 00:12:07.464 } 00:12:07.464 } 00:12:07.464 } 00:12:07.464 ] 00:12:07.464 22:35:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # return 0 00:12:07.464 22:35:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a3bb7f39-1a28-4d7a-a2f1-81148a49a851 00:12:07.464 22:35:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:07.724 22:35:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:07.724 22:35:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a3bb7f39-1a28-4d7a-a2f1-81148a49a851 00:12:07.724 22:35:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:07.982 22:35:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:07.982 22:35:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:08.240 [2024-07-15 22:35:51.637262] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:08.240 22:35:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
a3bb7f39-1a28-4d7a-a2f1-81148a49a851 00:12:08.240 22:35:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # local es=0 00:12:08.240 22:35:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a3bb7f39-1a28-4d7a-a2f1-81148a49a851 00:12:08.240 22:35:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@630 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:08.240 22:35:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:12:08.240 22:35:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:08.240 22:35:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:12:08.240 22:35:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:08.240 22:35:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:12:08.240 22:35:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:08.240 22:35:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:08.240 22:35:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@645 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a3bb7f39-1a28-4d7a-a2f1-81148a49a851 00:12:08.499 request: 00:12:08.499 { 00:12:08.499 "uuid": "a3bb7f39-1a28-4d7a-a2f1-81148a49a851", 00:12:08.499 "method": "bdev_lvol_get_lvstores", 
00:12:08.499 "req_id": 1 00:12:08.499 } 00:12:08.499 Got JSON-RPC error response 00:12:08.499 response: 00:12:08.499 { 00:12:08.499 "code": -19, 00:12:08.499 "message": "No such device" 00:12:08.499 } 00:12:08.499 22:35:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@645 -- # es=1 00:12:08.499 22:35:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:12:08.499 22:35:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:12:08.499 22:35:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:12:08.499 22:35:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:08.757 aio_bdev 00:12:08.757 22:35:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1e058adf-5751-4c19-bc26-4b9bfb80cbb4 00:12:08.757 22:35:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@891 -- # local bdev_name=1e058adf-5751-4c19-bc26-4b9bfb80cbb4 00:12:08.757 22:35:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:12:08.757 22:35:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@893 -- # local i 00:12:08.757 22:35:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:12:08.757 22:35:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:12:08.757 22:35:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:09.322 22:35:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1e058adf-5751-4c19-bc26-4b9bfb80cbb4 -t 2000 00:12:09.322 [ 00:12:09.322 { 00:12:09.323 "name": "1e058adf-5751-4c19-bc26-4b9bfb80cbb4", 00:12:09.323 "aliases": [ 00:12:09.323 "lvs/lvol" 00:12:09.323 ], 00:12:09.323 "product_name": "Logical Volume", 00:12:09.323 "block_size": 4096, 00:12:09.323 "num_blocks": 38912, 00:12:09.323 "uuid": "1e058adf-5751-4c19-bc26-4b9bfb80cbb4", 00:12:09.323 "assigned_rate_limits": { 00:12:09.323 "rw_ios_per_sec": 0, 00:12:09.323 "rw_mbytes_per_sec": 0, 00:12:09.323 "r_mbytes_per_sec": 0, 00:12:09.323 "w_mbytes_per_sec": 0 00:12:09.323 }, 00:12:09.323 "claimed": false, 00:12:09.323 "zoned": false, 00:12:09.323 "supported_io_types": { 00:12:09.323 "read": true, 00:12:09.323 "write": true, 00:12:09.323 "unmap": true, 00:12:09.323 "flush": false, 00:12:09.323 "reset": true, 00:12:09.323 "nvme_admin": false, 00:12:09.323 "nvme_io": false, 00:12:09.323 "nvme_io_md": false, 00:12:09.323 "write_zeroes": true, 00:12:09.323 "zcopy": false, 00:12:09.323 "get_zone_info": false, 00:12:09.323 "zone_management": false, 00:12:09.323 "zone_append": false, 00:12:09.323 "compare": false, 00:12:09.323 "compare_and_write": false, 00:12:09.323 "abort": false, 00:12:09.323 "seek_hole": true, 00:12:09.323 "seek_data": true, 00:12:09.323 "copy": false, 00:12:09.323 "nvme_iov_md": false 00:12:09.323 }, 00:12:09.323 "driver_specific": { 00:12:09.323 "lvol": { 00:12:09.323 "lvol_store_uuid": "a3bb7f39-1a28-4d7a-a2f1-81148a49a851", 00:12:09.323 "base_bdev": "aio_bdev", 00:12:09.323 "thin_provision": false, 00:12:09.323 "num_allocated_clusters": 38, 00:12:09.323 "snapshot": false, 00:12:09.323 "clone": false, 00:12:09.323 "esnap_clone": false 00:12:09.323 } 00:12:09.323 } 00:12:09.323 } 00:12:09.323 ] 00:12:09.323 22:35:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # return 0 00:12:09.323 22:35:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a3bb7f39-1a28-4d7a-a2f1-81148a49a851 00:12:09.323 22:35:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:09.580 22:35:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:09.580 22:35:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a3bb7f39-1a28-4d7a-a2f1-81148a49a851 00:12:09.580 22:35:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:09.838 22:35:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:09.838 22:35:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1e058adf-5751-4c19-bc26-4b9bfb80cbb4 00:12:10.096 22:35:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a3bb7f39-1a28-4d7a-a2f1-81148a49a851 00:12:10.354 22:35:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:10.613 22:35:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:10.613 00:12:10.613 real 0m19.321s 00:12:10.613 user 0m49.028s 00:12:10.613 sys 0m5.016s 00:12:10.613 22:35:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1118 -- # xtrace_disable 00:12:10.613 22:35:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 
00:12:10.613 ************************************ 00:12:10.613 END TEST lvs_grow_dirty 00:12:10.613 ************************************ 00:12:10.871 22:35:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1136 -- # return 0 00:12:10.871 22:35:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:10.871 22:35:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@800 -- # type=--id 00:12:10.872 22:35:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@801 -- # id=0 00:12:10.872 22:35:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@802 -- # '[' --id = --pid ']' 00:12:10.872 22:35:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:10.872 22:35:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # shm_files=nvmf_trace.0 00:12:10.872 22:35:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # [[ -z nvmf_trace.0 ]] 00:12:10.872 22:35:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # for n in $shm_files 00:12:10.872 22:35:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:10.872 nvmf_trace.0 00:12:10.872 22:35:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@815 -- # return 0 00:12:10.872 22:35:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:10.872 22:35:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:10.872 22:35:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:12:10.872 22:35:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:10.872 22:35:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:12:10.872 22:35:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:10.872 22:35:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:10.872 rmmod 
nvme_tcp 00:12:10.872 rmmod nvme_fabrics 00:12:10.872 rmmod nvme_keyring 00:12:10.872 22:35:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:10.872 22:35:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:12:10.872 22:35:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:12:10.872 22:35:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1223374 ']' 00:12:10.872 22:35:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1223374 00:12:10.872 22:35:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@942 -- # '[' -z 1223374 ']' 00:12:10.872 22:35:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # kill -0 1223374 00:12:10.872 22:35:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@947 -- # uname 00:12:10.872 22:35:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:12:10.872 22:35:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1223374 00:12:10.872 22:35:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:12:10.872 22:35:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:12:10.872 22:35:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1223374' 00:12:10.872 killing process with pid 1223374 00:12:10.872 22:35:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@961 -- # kill 1223374 00:12:10.872 22:35:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # wait 1223374 00:12:11.131 22:35:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:11.131 22:35:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:11.131 22:35:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:11.131 22:35:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:12:11.131 22:35:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:11.131 22:35:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.131 22:35:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:11.131 22:35:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.668 22:35:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:13.668 00:12:13.668 real 0m42.859s 00:12:13.668 user 1m12.067s 00:12:13.668 sys 0m8.882s 00:12:13.668 22:35:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1118 -- # xtrace_disable 00:12:13.668 22:35:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:13.668 ************************************ 00:12:13.668 END TEST nvmf_lvs_grow 00:12:13.668 ************************************ 00:12:13.668 22:35:56 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:12:13.668 22:35:56 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:13.668 22:35:56 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:12:13.668 22:35:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:12:13.668 22:35:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:13.668 ************************************ 00:12:13.668 START TEST nvmf_bdev_io_wait 00:12:13.668 ************************************ 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:13.668 * Looking for test storage... 
00:12:13.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:13.668 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.669 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:13.669 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.669 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:13.669 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:13.669 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:12:13.669 22:35:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:15.599 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:15.599 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:12:15.599 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:15.599 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:12:15.599 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:15.599 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:15.599 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:15.599 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:12:15.599 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:15.599 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:12:15.599 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:12:15.599 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:12:15.599 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:12:15.599 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:12:15.599 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:12:15.599 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:15.599 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:15.599 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:15.600 22:35:58 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:15.600 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:15.600 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:15.600 22:35:58 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:15.600 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:15.600 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- 
# NVMF_SECOND_TARGET_IP= 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:15.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:15.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:12:15.600 00:12:15.600 --- 10.0.0.2 ping statistics --- 00:12:15.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.600 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:15.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:15.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:12:15.600 00:12:15.600 --- 10.0.0.1 ping statistics --- 00:12:15.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.600 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1225898 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1225898 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@823 -- # '[' -z 1225898 ']' 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@828 -- # local max_retries=100 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # xtrace_disable 00:12:15.600 22:35:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:15.600 [2024-07-15 22:35:58.862440] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:12:15.600 [2024-07-15 22:35:58.862517] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.600 [2024-07-15 22:35:58.932458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:15.600 [2024-07-15 22:35:59.055968] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:15.600 [2024-07-15 22:35:59.056029] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.600 [2024-07-15 22:35:59.056059] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.600 [2024-07-15 22:35:59.056071] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.600 [2024-07-15 22:35:59.056081] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:15.600 [2024-07-15 22:35:59.057963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.600 [2024-07-15 22:35:59.057992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.600 [2024-07-15 22:35:59.058053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.600 [2024-07-15 22:35:59.058056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # return 0 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:15.859 [2024-07-15 22:35:59.215906] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:15.859 Malloc0 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:15.859 [2024-07-15 22:35:59.279645] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1225930 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1225931 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:15.859 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1225934 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:15.860 { 00:12:15.860 "params": { 00:12:15.860 "name": "Nvme$subsystem", 00:12:15.860 "trtype": "$TEST_TRANSPORT", 00:12:15.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:15.860 "adrfam": "ipv4", 00:12:15.860 "trsvcid": "$NVMF_PORT", 00:12:15.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:15.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:15.860 "hdgst": ${hdgst:-false}, 00:12:15.860 "ddgst": ${ddgst:-false} 00:12:15.860 }, 00:12:15.860 "method": "bdev_nvme_attach_controller" 00:12:15.860 } 00:12:15.860 EOF 00:12:15.860 )") 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1225936 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:15.860 { 00:12:15.860 "params": { 00:12:15.860 "name": "Nvme$subsystem", 00:12:15.860 "trtype": "$TEST_TRANSPORT", 00:12:15.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:15.860 "adrfam": "ipv4", 00:12:15.860 "trsvcid": "$NVMF_PORT", 00:12:15.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:15.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:15.860 "hdgst": ${hdgst:-false}, 00:12:15.860 "ddgst": ${ddgst:-false} 00:12:15.860 }, 00:12:15.860 "method": "bdev_nvme_attach_controller" 00:12:15.860 } 00:12:15.860 EOF 00:12:15.860 )") 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@35 -- # sync 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:15.860 { 00:12:15.860 "params": { 00:12:15.860 "name": "Nvme$subsystem", 00:12:15.860 "trtype": "$TEST_TRANSPORT", 00:12:15.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:15.860 "adrfam": "ipv4", 00:12:15.860 "trsvcid": "$NVMF_PORT", 00:12:15.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:15.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:15.860 "hdgst": ${hdgst:-false}, 00:12:15.860 "ddgst": ${ddgst:-false} 00:12:15.860 }, 00:12:15.860 "method": "bdev_nvme_attach_controller" 00:12:15.860 } 00:12:15.860 EOF 00:12:15.860 )") 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:15.860 { 00:12:15.860 "params": { 00:12:15.860 "name": "Nvme$subsystem", 00:12:15.860 "trtype": "$TEST_TRANSPORT", 00:12:15.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:15.860 "adrfam": "ipv4", 00:12:15.860 "trsvcid": "$NVMF_PORT", 00:12:15.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:15.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:15.860 "hdgst": ${hdgst:-false}, 00:12:15.860 "ddgst": ${ddgst:-false} 00:12:15.860 }, 00:12:15.860 "method": "bdev_nvme_attach_controller" 00:12:15.860 } 00:12:15.860 EOF 00:12:15.860 )") 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1225930 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:15.860 "params": { 00:12:15.860 "name": "Nvme1", 00:12:15.860 "trtype": "tcp", 00:12:15.860 "traddr": "10.0.0.2", 00:12:15.860 "adrfam": "ipv4", 00:12:15.860 "trsvcid": "4420", 00:12:15.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:15.860 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:15.860 "hdgst": false, 00:12:15.860 "ddgst": false 00:12:15.860 }, 00:12:15.860 "method": "bdev_nvme_attach_controller" 00:12:15.860 }' 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:15.860 "params": { 00:12:15.860 "name": "Nvme1", 00:12:15.860 "trtype": "tcp", 00:12:15.860 "traddr": "10.0.0.2", 00:12:15.860 "adrfam": "ipv4", 00:12:15.860 "trsvcid": "4420", 00:12:15.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:15.860 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:15.860 "hdgst": false, 00:12:15.860 "ddgst": false 00:12:15.860 }, 00:12:15.860 "method": "bdev_nvme_attach_controller" 00:12:15.860 }' 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:15.860 "params": { 00:12:15.860 "name": "Nvme1", 00:12:15.860 "trtype": "tcp", 00:12:15.860 "traddr": "10.0.0.2", 00:12:15.860 "adrfam": "ipv4", 00:12:15.860 "trsvcid": "4420", 00:12:15.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:15.860 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:15.860 "hdgst": false, 00:12:15.860 "ddgst": false 00:12:15.860 }, 00:12:15.860 "method": "bdev_nvme_attach_controller" 00:12:15.860 }' 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:15.860 22:35:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:15.860 "params": { 00:12:15.860 "name": "Nvme1", 00:12:15.860 "trtype": "tcp", 
00:12:15.860 "traddr": "10.0.0.2", 00:12:15.860 "adrfam": "ipv4", 00:12:15.860 "trsvcid": "4420", 00:12:15.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:15.860 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:15.860 "hdgst": false, 00:12:15.860 "ddgst": false 00:12:15.860 }, 00:12:15.860 "method": "bdev_nvme_attach_controller" 00:12:15.860 }' 00:12:15.860 [2024-07-15 22:35:59.328484] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:12:15.860 [2024-07-15 22:35:59.328484] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:12:15.860 [2024-07-15 22:35:59.328487] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:12:15.860 [2024-07-15 22:35:59.328579] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-15 22:35:59.328581] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-15 22:35:59.328581] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:15.860 --proc-type=auto ] 00:12:15.860 --proc-type=auto ] 00:12:15.860 [2024-07-15 22:35:59.328622] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:12:15.860 [2024-07-15 22:35:59.328707] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:16.119 [2024-07-15 22:35:59.508532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.119 [2024-07-15 22:35:59.604718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:16.119 [2024-07-15 22:35:59.606959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.377 [2024-07-15 22:35:59.705828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:16.377 [2024-07-15 22:35:59.706642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.377 [2024-07-15 22:35:59.779276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.377 [2024-07-15 22:35:59.806275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:16.377 [2024-07-15 22:35:59.876047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:12:16.635 Running I/O for 1 seconds... 00:12:16.635 Running I/O for 1 seconds... 00:12:16.635 Running I/O for 1 seconds... 00:12:16.635 Running I/O for 1 seconds... 
00:12:17.572 00:12:17.572 Latency(us) 00:12:17.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:17.572 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:17.572 Nvme1n1 : 1.02 6534.00 25.52 0.00 0.00 19379.65 8835.22 33010.73 00:12:17.572 =================================================================================================================== 00:12:17.572 Total : 6534.00 25.52 0.00 0.00 19379.65 8835.22 33010.73 00:12:17.572 00:12:17.572 Latency(us) 00:12:17.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:17.572 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:17.572 Nvme1n1 : 1.01 6432.56 25.13 0.00 0.00 19831.90 6019.60 37671.06 00:12:17.572 =================================================================================================================== 00:12:17.572 Total : 6432.56 25.13 0.00 0.00 19831.90 6019.60 37671.06 00:12:17.572 00:12:17.572 Latency(us) 00:12:17.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:17.572 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:17.572 Nvme1n1 : 1.00 199614.73 779.75 0.00 0.00 638.90 263.96 861.68 00:12:17.572 =================================================================================================================== 00:12:17.572 Total : 199614.73 779.75 0.00 0.00 638.90 263.96 861.68 00:12:17.572 00:12:17.572 Latency(us) 00:12:17.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:17.572 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:17.573 Nvme1n1 : 1.01 9239.92 36.09 0.00 0.00 13800.33 5558.42 22330.79 00:12:17.573 =================================================================================================================== 00:12:17.573 Total : 9239.92 36.09 0.00 0.00 13800.33 5558.42 22330.79 00:12:18.138 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 1225931 00:12:18.138 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1225934 00:12:18.138 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1225936 00:12:18.138 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:18.138 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:18.138 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:18.138 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:18.138 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:18.138 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:18.138 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:18.138 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:12:18.138 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:18.138 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:12:18.138 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:18.138 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:18.138 rmmod nvme_tcp 00:12:18.138 rmmod nvme_fabrics 00:12:18.138 rmmod nvme_keyring 00:12:18.138 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:18.138 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:12:18.138 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:12:18.138 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1225898 ']' 00:12:18.138 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1225898 00:12:18.138 22:36:01 
nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@942 -- # '[' -z 1225898 ']' 00:12:18.138 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # kill -0 1225898 00:12:18.138 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@947 -- # uname 00:12:18.138 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:12:18.138 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1225898 00:12:18.138 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:12:18.138 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:12:18.138 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1225898' 00:12:18.138 killing process with pid 1225898 00:12:18.138 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@961 -- # kill 1225898 00:12:18.138 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # wait 1225898 00:12:18.396 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:18.396 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:18.396 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:18.396 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:18.396 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:18.396 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.396 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:18.396 22:36:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.296 22:36:03 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:20.296 00:12:20.296 real 0m7.190s 00:12:20.296 user 0m16.609s 00:12:20.296 sys 0m3.436s 00:12:20.296 22:36:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1118 -- # xtrace_disable 00:12:20.296 22:36:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:20.296 ************************************ 00:12:20.296 END TEST nvmf_bdev_io_wait 00:12:20.296 ************************************ 00:12:20.555 22:36:03 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:12:20.555 22:36:03 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:20.555 22:36:03 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:12:20.555 22:36:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:12:20.555 22:36:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:20.555 ************************************ 00:12:20.555 START TEST nvmf_queue_depth 00:12:20.555 ************************************ 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:20.555 * Looking for test storage... 
00:12:20.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:12:20.555 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:20.556 22:36:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:20.556 22:36:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:20.556 22:36:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:20.556 22:36:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:20.556 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:20.556 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:20.556 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:20.556 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:20.556 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:20.556 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.556 22:36:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:20.556 22:36:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.556 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:20.556 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:20.556 22:36:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:12:20.556 22:36:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:22.458 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:22.458 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:12:22.458 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local 
-a pci_devs 00:12:22.458 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:22.458 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:22.458 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:22.458 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:22.458 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:12:22.458 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:22.458 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:12:22.458 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:12:22.458 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:12:22.458 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:12:22.458 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:12:22.458 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:12:22.458 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:22.459 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:22.459 Found 0000:0a:00.1 (0x8086 - 
0x159b) 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:22.459 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:22.459 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:22.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:22.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:12:22.459 00:12:22.459 --- 10.0.0.2 ping statistics --- 00:12:22.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.459 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:22.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:22.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:12:22.459 00:12:22.459 --- 10.0.0.1 ping statistics --- 00:12:22.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.459 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:22.459 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:22.719 22:36:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:22.719 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:22.719 22:36:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:22.719 22:36:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:12:22.719 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1228265 00:12:22.719 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:22.719 22:36:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1228265 00:12:22.719 22:36:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@823 -- # '[' -z 1228265 ']' 00:12:22.719 22:36:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.719 22:36:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@828 -- # local max_retries=100 00:12:22.719 22:36:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.719 22:36:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # xtrace_disable 00:12:22.719 22:36:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:22.719 [2024-07-15 22:36:06.006821] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:12:22.719 [2024-07-15 22:36:06.006906] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.719 [2024-07-15 22:36:06.071045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.719 [2024-07-15 22:36:06.183960] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:22.719 [2024-07-15 22:36:06.184019] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:22.719 [2024-07-15 22:36:06.184033] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:22.719 [2024-07-15 22:36:06.184045] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:22.719 [2024-07-15 22:36:06.184055] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:22.719 [2024-07-15 22:36:06.184084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:22.977 22:36:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:12:22.977 22:36:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # return 0 00:12:22.977 22:36:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:22.977 22:36:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:22.977 22:36:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:22.978 22:36:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.978 22:36:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:22.978 22:36:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:22.978 22:36:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:22.978 [2024-07-15 22:36:06.328767] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:22.978 22:36:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:22.978 22:36:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:22.978 22:36:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:22.978 22:36:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 
00:12:22.978 Malloc0 00:12:22.978 22:36:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:22.978 22:36:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:22.978 22:36:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:22.978 22:36:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:22.978 22:36:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:22.978 22:36:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:22.978 22:36:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:22.978 22:36:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:22.978 22:36:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:22.978 22:36:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.978 22:36:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:22.978 22:36:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:22.978 [2024-07-15 22:36:06.392871] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.978 22:36:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:22.978 22:36:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1228401 00:12:22.978 22:36:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:22.978 22:36:06 nvmf_tcp.nvmf_queue_depth -- 
target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:22.978 22:36:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1228401 /var/tmp/bdevperf.sock 00:12:22.978 22:36:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@823 -- # '[' -z 1228401 ']' 00:12:22.978 22:36:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:22.978 22:36:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@828 -- # local max_retries=100 00:12:22.978 22:36:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:22.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:22.978 22:36:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # xtrace_disable 00:12:22.978 22:36:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:22.978 [2024-07-15 22:36:06.437743] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:12:22.978 [2024-07-15 22:36:06.437808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1228401 ] 00:12:23.235 [2024-07-15 22:36:06.496386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.235 [2024-07-15 22:36:06.602521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.235 22:36:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:12:23.235 22:36:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # return 0 00:12:23.235 22:36:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:23.235 22:36:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:23.235 22:36:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:23.494 NVMe0n1 00:12:23.494 22:36:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:23.494 22:36:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:23.494 Running I/O for 10 seconds... 
00:12:35.711 00:12:35.711 Latency(us) 00:12:35.711 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.711 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:35.711 Verification LBA range: start 0x0 length 0x4000 00:12:35.711 NVMe0n1 : 10.09 8083.11 31.57 0.00 0.00 126038.70 24758.04 86604.61 00:12:35.711 =================================================================================================================== 00:12:35.711 Total : 8083.11 31.57 0.00 0.00 126038.70 24758.04 86604.61 00:12:35.711 0 00:12:35.711 22:36:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1228401 00:12:35.711 22:36:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@942 -- # '[' -z 1228401 ']' 00:12:35.711 22:36:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # kill -0 1228401 00:12:35.711 22:36:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # uname 00:12:35.711 22:36:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:12:35.711 22:36:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1228401 00:12:35.711 22:36:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:12:35.711 22:36:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:12:35.711 22:36:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1228401' 00:12:35.711 killing process with pid 1228401 00:12:35.711 22:36:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@961 -- # kill 1228401 00:12:35.711 Received shutdown signal, test time was about 10.000000 seconds 00:12:35.711 00:12:35.711 Latency(us) 00:12:35.711 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.711 
=================================================================================================================== 00:12:35.711 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:35.711 22:36:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # wait 1228401 00:12:35.711 22:36:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:35.711 22:36:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:35.711 22:36:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:35.711 22:36:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:12:35.711 22:36:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:35.711 22:36:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:12:35.712 22:36:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:35.712 22:36:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:35.712 rmmod nvme_tcp 00:12:35.712 rmmod nvme_fabrics 00:12:35.712 rmmod nvme_keyring 00:12:35.712 22:36:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:35.712 22:36:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:12:35.712 22:36:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:12:35.712 22:36:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1228265 ']' 00:12:35.712 22:36:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1228265 00:12:35.712 22:36:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@942 -- # '[' -z 1228265 ']' 00:12:35.712 22:36:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # kill -0 1228265 00:12:35.712 22:36:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # uname 00:12:35.712 22:36:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:12:35.712 22:36:17 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1228265 00:12:35.712 22:36:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:12:35.712 22:36:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:12:35.712 22:36:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1228265' 00:12:35.712 killing process with pid 1228265 00:12:35.712 22:36:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@961 -- # kill 1228265 00:12:35.712 22:36:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # wait 1228265 00:12:35.712 22:36:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:35.712 22:36:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:35.712 22:36:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:35.712 22:36:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:35.712 22:36:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:35.712 22:36:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.712 22:36:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:35.712 22:36:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.281 22:36:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:36.281 00:12:36.281 real 0m15.887s 00:12:36.281 user 0m22.447s 00:12:36.281 sys 0m2.975s 00:12:36.281 22:36:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1118 -- # xtrace_disable 00:12:36.281 22:36:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:36.281 ************************************ 00:12:36.281 END TEST nvmf_queue_depth 
00:12:36.281 ************************************ 00:12:36.281 22:36:19 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:12:36.281 22:36:19 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:36.281 22:36:19 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:12:36.281 22:36:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:12:36.281 22:36:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:36.541 ************************************ 00:12:36.541 START TEST nvmf_target_multipath 00:12:36.541 ************************************ 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:36.541 * Looking for test storage... 00:12:36.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # 
nvmftestinit 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:12:36.541 22:36:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:38.458 
22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:38.458 22:36:21 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:38.458 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:38.458 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:38.458 
22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:38.458 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:38.458 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:38.458 22:36:21 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:38.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:38.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:12:38.458 00:12:38.458 --- 10.0.0.2 ping statistics --- 00:12:38.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.458 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:38.458 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:38.458 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:12:38.458 00:12:38.458 --- 10.0.0.1 ping statistics --- 00:12:38.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.458 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:38.458 22:36:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:12:38.459 22:36:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:12:38.459 only one NIC for nvmf test 00:12:38.459 22:36:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # 
nvmftestfini 00:12:38.459 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:38.459 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:38.459 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:38.459 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:38.459 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:38.459 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:38.790 rmmod nvme_tcp 00:12:38.790 rmmod nvme_fabrics 00:12:38.790 rmmod nvme_keyring 00:12:38.790 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:38.790 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:38.790 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:38.790 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:38.790 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:38.790 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:38.790 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:38.790 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:38.790 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:38.790 22:36:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.790 22:36:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:38.790 22:36:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.696 22:36:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:12:40.696 22:36:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:12:40.696 22:36:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:12:40.696 22:36:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:40.696 22:36:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:40.696 22:36:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:40.696 22:36:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:40.696 22:36:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:40.696 22:36:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:40.697 22:36:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:40.697 22:36:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:40.697 22:36:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:40.697 22:36:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:40.697 22:36:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:40.697 22:36:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:40.697 22:36:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:40.697 22:36:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:40.697 22:36:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:40.697 22:36:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.697 22:36:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:40.697 22:36:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:12:40.697 22:36:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:40.697 00:12:40.697 real 0m4.263s 00:12:40.697 user 0m0.762s 00:12:40.697 sys 0m1.488s 00:12:40.697 22:36:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1118 -- # xtrace_disable 00:12:40.697 22:36:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:40.697 ************************************ 00:12:40.697 END TEST nvmf_target_multipath 00:12:40.697 ************************************ 00:12:40.697 22:36:24 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:12:40.697 22:36:24 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:40.697 22:36:24 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:12:40.697 22:36:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:12:40.697 22:36:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:40.697 ************************************ 00:12:40.697 START TEST nvmf_zcopy 00:12:40.697 ************************************ 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:40.697 * Looking for test storage... 
00:12:40.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:12:40.697 22:36:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:12:42.600 22:36:26 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:42.600 22:36:26 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:42.600 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:42.600 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:42.600 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:42.601 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.601 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:42.601 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:42.601 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:42.601 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:42.601 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.601 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:42.601 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:42.601 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.601 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:42.601 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.601 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:42.601 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:42.601 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:42.601 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:42.601 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.601 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:42.601 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:42.601 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.601 22:36:26 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:42.601 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:12:42.601 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:42.601 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:42.601 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:42.601 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:42.601 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:42.601 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:42.601 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:42.601 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:42.601 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:42.601 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:42.601 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:42.601 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:42.601 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:42.601 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:42.601 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:42.601 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:42.858 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:42.858 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:42.859 22:36:26 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:42.859 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:42.859 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:42.859 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:42.859 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:42.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:42.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:12:42.859 00:12:42.859 --- 10.0.0.2 ping statistics --- 00:12:42.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.859 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:12:42.859 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:42.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:42.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:12:42.859 00:12:42.859 --- 10.0.0.1 ping statistics --- 00:12:42.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.859 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:12:42.859 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:42.859 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:12:42.859 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:42.859 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:42.859 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:42.859 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:42.859 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:42.859 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:42.859 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:42.859 22:36:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:42.859 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:42.859 22:36:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:42.859 22:36:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:42.859 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1233967 00:12:42.859 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1233967 00:12:42.859 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:42.859 22:36:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@823 -- # '[' -z 1233967 ']' 00:12:42.859 22:36:26 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.859 22:36:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@828 -- # local max_retries=100 00:12:42.859 22:36:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.859 22:36:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # xtrace_disable 00:12:42.859 22:36:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:42.859 [2024-07-15 22:36:26.283565] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:12:42.859 [2024-07-15 22:36:26.283652] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:43.116 [2024-07-15 22:36:26.360809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.116 [2024-07-15 22:36:26.481051] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:43.116 [2024-07-15 22:36:26.481102] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:43.116 [2024-07-15 22:36:26.481119] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:43.116 [2024-07-15 22:36:26.481133] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:43.116 [2024-07-15 22:36:26.481144] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:43.116 [2024-07-15 22:36:26.481187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:43.116 22:36:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:12:43.116 22:36:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # return 0 00:12:43.116 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:43.116 22:36:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:43.116 22:36:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:43.374 [2024-07-15 22:36:26.624303] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable 
00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:43.374 [2024-07-15 22:36:26.640472] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:43.374 malloc0 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem 
config 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:43.374 { 00:12:43.374 "params": { 00:12:43.374 "name": "Nvme$subsystem", 00:12:43.374 "trtype": "$TEST_TRANSPORT", 00:12:43.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:43.374 "adrfam": "ipv4", 00:12:43.374 "trsvcid": "$NVMF_PORT", 00:12:43.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:43.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:43.374 "hdgst": ${hdgst:-false}, 00:12:43.374 "ddgst": ${ddgst:-false} 00:12:43.374 }, 00:12:43.374 "method": "bdev_nvme_attach_controller" 00:12:43.374 } 00:12:43.374 EOF 00:12:43.374 )") 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:12:43.374 22:36:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:43.374 "params": { 00:12:43.374 "name": "Nvme1", 00:12:43.374 "trtype": "tcp", 00:12:43.374 "traddr": "10.0.0.2", 00:12:43.374 "adrfam": "ipv4", 00:12:43.374 "trsvcid": "4420", 00:12:43.374 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:43.374 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:43.374 "hdgst": false, 00:12:43.374 "ddgst": false 00:12:43.374 }, 00:12:43.374 "method": "bdev_nvme_attach_controller" 00:12:43.374 }' 00:12:43.374 [2024-07-15 22:36:26.720077] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:12:43.374 [2024-07-15 22:36:26.720158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1234112 ] 00:12:43.374 [2024-07-15 22:36:26.783471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.631 [2024-07-15 22:36:26.907128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.888 Running I/O for 10 seconds... 00:12:53.872 00:12:53.872 Latency(us) 00:12:53.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.872 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:12:53.872 Verification LBA range: start 0x0 length 0x1000 00:12:53.872 Nvme1n1 : 10.01 5833.79 45.58 0.00 0.00 21878.73 609.85 32622.36 00:12:53.872 =================================================================================================================== 00:12:53.872 Total : 5833.79 45.58 0.00 0.00 21878.73 609.85 32622.36 00:12:54.132 22:36:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1235307 00:12:54.132 22:36:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:12:54.132 22:36:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:54.132 22:36:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:12:54.132 22:36:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:12:54.132 22:36:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:12:54.132 22:36:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:12:54.132 22:36:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:54.132 22:36:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:12:54.132 { 00:12:54.132 "params": { 00:12:54.132 "name": "Nvme$subsystem", 00:12:54.132 "trtype": "$TEST_TRANSPORT", 00:12:54.132 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:54.132 "adrfam": "ipv4", 00:12:54.132 "trsvcid": "$NVMF_PORT", 00:12:54.132 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:54.132 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:54.132 "hdgst": ${hdgst:-false}, 00:12:54.132 "ddgst": ${ddgst:-false} 00:12:54.132 }, 00:12:54.132 "method": "bdev_nvme_attach_controller" 00:12:54.132 } 00:12:54.132 EOF 00:12:54.132 )") 00:12:54.132 22:36:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:12:54.132 [2024-07-15 22:36:37.462150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.132 [2024-07-15 22:36:37.462208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.132 22:36:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:12:54.132 22:36:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:12:54.132 22:36:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:54.132 "params": { 00:12:54.132 "name": "Nvme1", 00:12:54.132 "trtype": "tcp", 00:12:54.132 "traddr": "10.0.0.2", 00:12:54.132 "adrfam": "ipv4", 00:12:54.132 "trsvcid": "4420", 00:12:54.132 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:54.132 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:54.132 "hdgst": false, 00:12:54.132 "ddgst": false 00:12:54.132 }, 00:12:54.132 "method": "bdev_nvme_attach_controller" 00:12:54.132 }' 00:12:54.132 [2024-07-15 22:36:37.470117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.132 [2024-07-15 22:36:37.470144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.132 [2024-07-15 22:36:37.478138] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.132 [2024-07-15 22:36:37.478162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:12:54.132 [2024-07-15 22:36:37.486152] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.132 [2024-07-15 22:36:37.486186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.132 [2024-07-15 22:36:37.494191] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.132 [2024-07-15 22:36:37.494211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.132 [2024-07-15 22:36:37.499417] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:12:54.132 [2024-07-15 22:36:37.499489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1235307 ] 00:12:54.132 [2024-07-15 22:36:37.502211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.132 [2024-07-15 22:36:37.502246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.132 [2024-07-15 22:36:37.510227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.132 [2024-07-15 22:36:37.510247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.132 [2024-07-15 22:36:37.518252] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.132 [2024-07-15 22:36:37.518271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.132 [2024-07-15 22:36:37.526288] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.132 [2024-07-15 22:36:37.526308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.132 [2024-07-15 22:36:37.534316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:54.132 [2024-07-15 22:36:37.534340] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.132 [2024-07-15 22:36:37.542338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.132 [2024-07-15 22:36:37.542362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.132 [2024-07-15 22:36:37.550358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.132 [2024-07-15 22:36:37.550382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.132 [2024-07-15 22:36:37.558381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.132 [2024-07-15 22:36:37.558405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.132 [2024-07-15 22:36:37.566401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.133 [2024-07-15 22:36:37.566425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.133 [2024-07-15 22:36:37.566539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.133 [2024-07-15 22:36:37.574450] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.133 [2024-07-15 22:36:37.574486] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.133 [2024-07-15 22:36:37.582476] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.133 [2024-07-15 22:36:37.582514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.133 [2024-07-15 22:36:37.590470] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.133 [2024-07-15 22:36:37.590495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.133 [2024-07-15 22:36:37.598494] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.133 [2024-07-15 22:36:37.598517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.133 [2024-07-15 22:36:37.606516] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.133 [2024-07-15 22:36:37.606539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.133 [2024-07-15 22:36:37.614538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.133 [2024-07-15 22:36:37.614562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.133 [2024-07-15 22:36:37.622562] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.133 [2024-07-15 22:36:37.622587] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.133 [2024-07-15 22:36:37.630589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.133 [2024-07-15 22:36:37.630616] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.392 [2024-07-15 22:36:37.638630] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.392 [2024-07-15 22:36:37.638667] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.392 [2024-07-15 22:36:37.646631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.392 [2024-07-15 22:36:37.646655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.392 [2024-07-15 22:36:37.654651] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.392 [2024-07-15 22:36:37.654675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.392 [2024-07-15 22:36:37.662675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:54.392 [2024-07-15 22:36:37.662699] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.392 [2024-07-15 22:36:37.670696] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.392 [2024-07-15 22:36:37.670720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.392 [2024-07-15 22:36:37.678717] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.392 [2024-07-15 22:36:37.678740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.392 [2024-07-15 22:36:37.686738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.392 [2024-07-15 22:36:37.686762] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.392 [2024-07-15 22:36:37.688422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.392 [2024-07-15 22:36:37.694760] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.392 [2024-07-15 22:36:37.694783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.392 [2024-07-15 22:36:37.702789] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.392 [2024-07-15 22:36:37.702816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.392 [2024-07-15 22:36:37.710825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.392 [2024-07-15 22:36:37.710860] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.392 [2024-07-15 22:36:37.718848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.392 [2024-07-15 22:36:37.718892] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.392 [2024-07-15 
22:36:37.726874] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.392 [2024-07-15 22:36:37.726935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.392 [2024-07-15 22:36:37.734905] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.392 [2024-07-15 22:36:37.734955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.392 [2024-07-15 22:36:37.742937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.392 [2024-07-15 22:36:37.742971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.392 [2024-07-15 22:36:37.750961] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.392 [2024-07-15 22:36:37.750995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.392 [2024-07-15 22:36:37.758956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.392 [2024-07-15 22:36:37.758977] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.392 [2024-07-15 22:36:37.766991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.392 [2024-07-15 22:36:37.767020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.392 [2024-07-15 22:36:37.775010] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.392 [2024-07-15 22:36:37.775043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.392 [2024-07-15 22:36:37.783032] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.392 [2024-07-15 22:36:37.783076] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.392 [2024-07-15 22:36:37.791016] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.392 [2024-07-15 22:36:37.791036] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.392 [2024-07-15 22:36:37.799036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.392 [2024-07-15 22:36:37.799056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.392 [2024-07-15 22:36:37.807077] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.392 [2024-07-15 22:36:37.807104] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.392 [2024-07-15 22:36:37.815086] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.392 [2024-07-15 22:36:37.815109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.392 [2024-07-15 22:36:37.823141] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.392 [2024-07-15 22:36:37.823164] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.392 [2024-07-15 22:36:37.831131] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.392 [2024-07-15 22:36:37.831171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.392 [2024-07-15 22:36:37.839169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.392 [2024-07-15 22:36:37.839193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.392 [2024-07-15 22:36:37.847193] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.392 [2024-07-15 22:36:37.847217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.392 [2024-07-15 22:36:37.855210] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:54.392 [2024-07-15 22:36:37.855250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.392 [2024-07-15 22:36:37.863247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.392 [2024-07-15 22:36:37.863272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.392 [2024-07-15 22:36:37.871269] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.392 [2024-07-15 22:36:37.871298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.392 Running I/O for 5 seconds... 00:12:54.392 [2024-07-15 22:36:37.879293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.392 [2024-07-15 22:36:37.879318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.392 [2024-07-15 22:36:37.891735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.392 [2024-07-15 22:36:37.891763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.651 [2024-07-15 22:36:37.901517] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.651 [2024-07-15 22:36:37.901546] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.651 [2024-07-15 22:36:37.912930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.651 [2024-07-15 22:36:37.912958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.651 [2024-07-15 22:36:37.923718] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.651 [2024-07-15 22:36:37.923756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.651 [2024-07-15 22:36:37.934455] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:12:54.651 [2024-07-15 22:36:37.934482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.651 [2024-07-15 22:36:37.945358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.651 [2024-07-15 22:36:37.945385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.651 [2024-07-15 22:36:37.956332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.651 [2024-07-15 22:36:37.956359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.651 [2024-07-15 22:36:37.967597] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.651 [2024-07-15 22:36:37.967623] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.651 [2024-07-15 22:36:37.978619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.651 [2024-07-15 22:36:37.978645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.651 [2024-07-15 22:36:37.991290] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.651 [2024-07-15 22:36:37.991316] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.651 [2024-07-15 22:36:38.002979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.651 [2024-07-15 22:36:38.003015] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.651 [2024-07-15 22:36:38.012360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.651 [2024-07-15 22:36:38.012387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.651 [2024-07-15 22:36:38.023593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.651 
[2024-07-15 22:36:38.023621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.651 [2024-07-15 22:36:38.034018] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.651 [2024-07-15 22:36:38.034045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.651 [2024-07-15 22:36:38.044394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.651 [2024-07-15 22:36:38.044421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.651 [2024-07-15 22:36:38.056839] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.651 [2024-07-15 22:36:38.056866] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.651 [2024-07-15 22:36:38.066053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.651 [2024-07-15 22:36:38.066080] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.651 [2024-07-15 22:36:38.077215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.651 [2024-07-15 22:36:38.077242] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.651 [2024-07-15 22:36:38.089459] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.651 [2024-07-15 22:36:38.089486] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.652 [2024-07-15 22:36:38.099093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.652 [2024-07-15 22:36:38.099120] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.652 [2024-07-15 22:36:38.110145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.652 [2024-07-15 22:36:38.110172] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.652 [2024-07-15 22:36:38.120735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.652 [2024-07-15 22:36:38.120761] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.652 [2024-07-15 22:36:38.131426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.652 [2024-07-15 22:36:38.131453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.652 [2024-07-15 22:36:38.141194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.652 [2024-07-15 22:36:38.141221] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.652 [2024-07-15 22:36:38.151916] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.652 [2024-07-15 22:36:38.151942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.910 [2024-07-15 22:36:38.163169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.910 [2024-07-15 22:36:38.163197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.910 [2024-07-15 22:36:38.174212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.910 [2024-07-15 22:36:38.174247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.910 [2024-07-15 22:36:38.184794] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.910 [2024-07-15 22:36:38.184820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.910 [2024-07-15 22:36:38.195684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.910 [2024-07-15 22:36:38.195711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:54.910 [2024-07-15 22:36:38.206534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.910 [2024-07-15 22:36:38.206561] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.910 [2024-07-15 22:36:38.217524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.910 [2024-07-15 22:36:38.217551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.910 [2024-07-15 22:36:38.228324] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.910 [2024-07-15 22:36:38.228351] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.910 [2024-07-15 22:36:38.238698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.910 [2024-07-15 22:36:38.238724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.910 [2024-07-15 22:36:38.249600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.910 [2024-07-15 22:36:38.249628] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.911 [2024-07-15 22:36:38.260115] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.911 [2024-07-15 22:36:38.260142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.911 [2024-07-15 22:36:38.270734] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.911 [2024-07-15 22:36:38.270761] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.911 [2024-07-15 22:36:38.281535] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.911 [2024-07-15 22:36:38.281562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.911 [2024-07-15 22:36:38.292165] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.911 [2024-07-15 22:36:38.292192] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.911 [2024-07-15 22:36:38.304760] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.911 [2024-07-15 22:36:38.304788] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.911 [2024-07-15 22:36:38.314555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.911 [2024-07-15 22:36:38.314582] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.911 [2024-07-15 22:36:38.325978] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.911 [2024-07-15 22:36:38.326005] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.911 [2024-07-15 22:36:38.336677] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.911 [2024-07-15 22:36:38.336704] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.911 [2024-07-15 22:36:38.347403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.911 [2024-07-15 22:36:38.347429] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.911 [2024-07-15 22:36:38.359540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.911 [2024-07-15 22:36:38.359573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.911 [2024-07-15 22:36:38.368909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.911 [2024-07-15 22:36:38.368936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.911 [2024-07-15 22:36:38.380145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:54.911 [2024-07-15 22:36:38.380171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.911 [2024-07-15 22:36:38.392658] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.911 [2024-07-15 22:36:38.392685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.911 [2024-07-15 22:36:38.402086] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.911 [2024-07-15 22:36:38.402113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.170 [2024-07-15 22:36:38.413803] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.170 [2024-07-15 22:36:38.413830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.170 [2024-07-15 22:36:38.424504] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.170 [2024-07-15 22:36:38.424531] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.171 [2024-07-15 22:36:38.435097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.171 [2024-07-15 22:36:38.435123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.171 [2024-07-15 22:36:38.447643] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.171 [2024-07-15 22:36:38.447671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.171 [2024-07-15 22:36:38.457250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.171 [2024-07-15 22:36:38.457276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.171 [2024-07-15 22:36:38.468398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.171 
[2024-07-15 22:36:38.468424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.171 [2024-07-15 22:36:38.480562] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.171 [2024-07-15 22:36:38.480589] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.171 [2024-07-15 22:36:38.490204] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.171 [2024-07-15 22:36:38.490231] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.171 [2024-07-15 22:36:38.501664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.171 [2024-07-15 22:36:38.501692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.171 [2024-07-15 22:36:38.512214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.171 [2024-07-15 22:36:38.512241] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.171 [2024-07-15 22:36:38.522724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.171 [2024-07-15 22:36:38.522752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.171 [2024-07-15 22:36:38.533322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.171 [2024-07-15 22:36:38.533349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.171 [2024-07-15 22:36:38.543716] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.171 [2024-07-15 22:36:38.543743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.171 [2024-07-15 22:36:38.553960] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.171 [2024-07-15 22:36:38.553986] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.171 [2024-07-15 22:36:38.564596] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.171 [2024-07-15 22:36:38.564631] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.171 [2024-07-15 22:36:38.575901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.171 [2024-07-15 22:36:38.575928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.171 [2024-07-15 22:36:38.586770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.171 [2024-07-15 22:36:38.586797] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.171 [2024-07-15 22:36:38.599127] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.171 [2024-07-15 22:36:38.599153] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.171 [2024-07-15 22:36:38.609242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.171 [2024-07-15 22:36:38.609268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.171 [2024-07-15 22:36:38.620256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.171 [2024-07-15 22:36:38.620282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.171 [2024-07-15 22:36:38.631097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.171 [2024-07-15 22:36:38.631124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.171 [2024-07-15 22:36:38.643356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.171 [2024-07-15 22:36:38.643383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:55.171 [2024-07-15 22:36:38.652991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.171 [2024-07-15 22:36:38.653017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats at roughly 10 ms intervals from 22:36:38.652 through 22:36:40.409 ...]
00:12:57.026 [2024-07-15 22:36:40.409680] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.026 [2024-07-15 22:36:40.409707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:12:57.026 [2024-07-15 22:36:40.419371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.026 [2024-07-15 22:36:40.419398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.026 [2024-07-15 22:36:40.430692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.026 [2024-07-15 22:36:40.430719] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.026 [2024-07-15 22:36:40.440983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.026 [2024-07-15 22:36:40.441010] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.026 [2024-07-15 22:36:40.451603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.026 [2024-07-15 22:36:40.451629] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.026 [2024-07-15 22:36:40.461783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.026 [2024-07-15 22:36:40.461810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.026 [2024-07-15 22:36:40.471940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.026 [2024-07-15 22:36:40.471967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.026 [2024-07-15 22:36:40.482370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.026 [2024-07-15 22:36:40.482397] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.026 [2024-07-15 22:36:40.492855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.026 [2024-07-15 22:36:40.492897] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.026 [2024-07-15 22:36:40.503144] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.026 [2024-07-15 22:36:40.503171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.026 [2024-07-15 22:36:40.513391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.026 [2024-07-15 22:36:40.513418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.026 [2024-07-15 22:36:40.524419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.026 [2024-07-15 22:36:40.524446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.285 [2024-07-15 22:36:40.536818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.285 [2024-07-15 22:36:40.536846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.285 [2024-07-15 22:36:40.546528] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.285 [2024-07-15 22:36:40.546557] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.285 [2024-07-15 22:36:40.557783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.285 [2024-07-15 22:36:40.557810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.285 [2024-07-15 22:36:40.568177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.285 [2024-07-15 22:36:40.568205] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.285 [2024-07-15 22:36:40.579158] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.285 [2024-07-15 22:36:40.579185] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.285 [2024-07-15 22:36:40.589741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:57.285 [2024-07-15 22:36:40.589768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.285 [2024-07-15 22:36:40.600449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.285 [2024-07-15 22:36:40.600485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.285 [2024-07-15 22:36:40.612728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.285 [2024-07-15 22:36:40.612755] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.285 [2024-07-15 22:36:40.621964] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.285 [2024-07-15 22:36:40.621991] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.285 [2024-07-15 22:36:40.632903] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.285 [2024-07-15 22:36:40.632942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.285 [2024-07-15 22:36:40.643191] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.285 [2024-07-15 22:36:40.643218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.285 [2024-07-15 22:36:40.653722] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.285 [2024-07-15 22:36:40.653749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.286 [2024-07-15 22:36:40.664601] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.286 [2024-07-15 22:36:40.664628] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.286 [2024-07-15 22:36:40.674979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.286 
[2024-07-15 22:36:40.675006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.286 [2024-07-15 22:36:40.685746] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.286 [2024-07-15 22:36:40.685773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.286 [2024-07-15 22:36:40.696290] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.286 [2024-07-15 22:36:40.696317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.286 [2024-07-15 22:36:40.706949] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.286 [2024-07-15 22:36:40.706976] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.286 [2024-07-15 22:36:40.717666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.286 [2024-07-15 22:36:40.717693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.286 [2024-07-15 22:36:40.730557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.286 [2024-07-15 22:36:40.730584] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.286 [2024-07-15 22:36:40.740234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.286 [2024-07-15 22:36:40.740261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.286 [2024-07-15 22:36:40.751549] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.286 [2024-07-15 22:36:40.751576] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.286 [2024-07-15 22:36:40.762311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.286 [2024-07-15 22:36:40.762338] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.286 [2024-07-15 22:36:40.773013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.286 [2024-07-15 22:36:40.773040] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.286 [2024-07-15 22:36:40.783479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.286 [2024-07-15 22:36:40.783506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.545 [2024-07-15 22:36:40.794397] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.545 [2024-07-15 22:36:40.794426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.545 [2024-07-15 22:36:40.805139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.545 [2024-07-15 22:36:40.805179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.545 [2024-07-15 22:36:40.816164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.546 [2024-07-15 22:36:40.816191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.546 [2024-07-15 22:36:40.826816] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.546 [2024-07-15 22:36:40.826843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.546 [2024-07-15 22:36:40.837617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.546 [2024-07-15 22:36:40.837644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.546 [2024-07-15 22:36:40.850380] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.546 [2024-07-15 22:36:40.850407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:57.546 [2024-07-15 22:36:40.859999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.546 [2024-07-15 22:36:40.860026] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.546 [2024-07-15 22:36:40.871020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.546 [2024-07-15 22:36:40.871047] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.546 [2024-07-15 22:36:40.881278] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.546 [2024-07-15 22:36:40.881305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.546 [2024-07-15 22:36:40.891615] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.546 [2024-07-15 22:36:40.891642] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.546 [2024-07-15 22:36:40.902155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.546 [2024-07-15 22:36:40.902182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.546 [2024-07-15 22:36:40.912973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.546 [2024-07-15 22:36:40.913002] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.546 [2024-07-15 22:36:40.923478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.546 [2024-07-15 22:36:40.923505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.546 [2024-07-15 22:36:40.934021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.546 [2024-07-15 22:36:40.934049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.546 [2024-07-15 22:36:40.944466] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.546 [2024-07-15 22:36:40.944493] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.546 [2024-07-15 22:36:40.955180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.546 [2024-07-15 22:36:40.955207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.546 [2024-07-15 22:36:40.967558] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.546 [2024-07-15 22:36:40.967585] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.546 [2024-07-15 22:36:40.977009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.546 [2024-07-15 22:36:40.977036] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.546 [2024-07-15 22:36:40.988239] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.546 [2024-07-15 22:36:40.988266] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.546 [2024-07-15 22:36:40.998996] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.546 [2024-07-15 22:36:40.999023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.546 [2024-07-15 22:36:41.009794] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.546 [2024-07-15 22:36:41.009821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.546 [2024-07-15 22:36:41.022837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.546 [2024-07-15 22:36:41.022864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.546 [2024-07-15 22:36:41.034378] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:57.546 [2024-07-15 22:36:41.034405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.546 [2024-07-15 22:36:41.043940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.546 [2024-07-15 22:36:41.043967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.804 [2024-07-15 22:36:41.055782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.804 [2024-07-15 22:36:41.055811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.804 [2024-07-15 22:36:41.067045] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.804 [2024-07-15 22:36:41.067072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.804 [2024-07-15 22:36:41.077703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.804 [2024-07-15 22:36:41.077731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.804 [2024-07-15 22:36:41.087805] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.804 [2024-07-15 22:36:41.087832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.804 [2024-07-15 22:36:41.099198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.804 [2024-07-15 22:36:41.099226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.804 [2024-07-15 22:36:41.109480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.804 [2024-07-15 22:36:41.109507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.804 [2024-07-15 22:36:41.120415] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.804 
[2024-07-15 22:36:41.120442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.804 [2024-07-15 22:36:41.132628] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.804 [2024-07-15 22:36:41.132655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.804 [2024-07-15 22:36:41.141906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.804 [2024-07-15 22:36:41.141933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.804 [2024-07-15 22:36:41.153040] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.804 [2024-07-15 22:36:41.153068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.804 [2024-07-15 22:36:41.163853] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.804 [2024-07-15 22:36:41.163891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.804 [2024-07-15 22:36:41.174125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.804 [2024-07-15 22:36:41.174158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.804 [2024-07-15 22:36:41.184634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.804 [2024-07-15 22:36:41.184663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.804 [2024-07-15 22:36:41.195506] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.804 [2024-07-15 22:36:41.195533] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.804 [2024-07-15 22:36:41.206067] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.804 [2024-07-15 22:36:41.206094] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.804 [2024-07-15 22:36:41.216728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.804 [2024-07-15 22:36:41.216755] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.804 [2024-07-15 22:36:41.227429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.804 [2024-07-15 22:36:41.227455] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.804 [2024-07-15 22:36:41.237932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.804 [2024-07-15 22:36:41.237959] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.804 [2024-07-15 22:36:41.250523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.804 [2024-07-15 22:36:41.250552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.804 [2024-07-15 22:36:41.260068] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.804 [2024-07-15 22:36:41.260095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.804 [2024-07-15 22:36:41.270985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.804 [2024-07-15 22:36:41.271013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.804 [2024-07-15 22:36:41.281966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.804 [2024-07-15 22:36:41.281993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.804 [2024-07-15 22:36:41.292149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.804 [2024-07-15 22:36:41.292177] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:57.804 [2024-07-15 22:36:41.302435] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.804 [2024-07-15 22:36:41.302462] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.063 [2024-07-15 22:36:41.315305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.063 [2024-07-15 22:36:41.315332] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.063 [2024-07-15 22:36:41.324621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.063 [2024-07-15 22:36:41.324648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.063 [2024-07-15 22:36:41.335446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.063 [2024-07-15 22:36:41.335473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.063 [2024-07-15 22:36:41.345861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.063 [2024-07-15 22:36:41.345896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.063 [2024-07-15 22:36:41.356414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.063 [2024-07-15 22:36:41.356441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.063 [2024-07-15 22:36:41.368786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.063 [2024-07-15 22:36:41.368814] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.063 [2024-07-15 22:36:41.378592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.063 [2024-07-15 22:36:41.378620] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.063 [2024-07-15 22:36:41.389830] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.063 [2024-07-15 22:36:41.389857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.063 [2024-07-15 22:36:41.400148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.063 [2024-07-15 22:36:41.400176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.063 [2024-07-15 22:36:41.410634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.063 [2024-07-15 22:36:41.410661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.063 [2024-07-15 22:36:41.421292] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.063 [2024-07-15 22:36:41.421319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.063 [2024-07-15 22:36:41.431921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.063 [2024-07-15 22:36:41.431953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.063 [2024-07-15 22:36:41.442374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.063 [2024-07-15 22:36:41.442401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.063 [2024-07-15 22:36:41.452664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.063 [2024-07-15 22:36:41.452690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.063 [2024-07-15 22:36:41.464899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.063 [2024-07-15 22:36:41.464946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.063 [2024-07-15 22:36:41.473661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:58.063 [2024-07-15 22:36:41.473687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.063 [2024-07-15 22:36:41.485818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.063 [2024-07-15 22:36:41.485845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.063 [2024-07-15 22:36:41.495638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.063 [2024-07-15 22:36:41.495665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.063 [2024-07-15 22:36:41.505829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.063 [2024-07-15 22:36:41.505855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.063 [2024-07-15 22:36:41.516403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.063 [2024-07-15 22:36:41.516430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.063 [2024-07-15 22:36:41.526598] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.063 [2024-07-15 22:36:41.526624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.063 [2024-07-15 22:36:41.537131] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.063 [2024-07-15 22:36:41.537158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.063 [2024-07-15 22:36:41.547264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.063 [2024-07-15 22:36:41.547290] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.063 [2024-07-15 22:36:41.557797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.063 
[2024-07-15 22:36:41.557823] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.323 [2024-07-15 22:36:41.568284] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.323 [2024-07-15 22:36:41.568312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.323 [2024-07-15 22:36:41.579142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.323 [2024-07-15 22:36:41.579169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.323 [2024-07-15 22:36:41.591194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.323 [2024-07-15 22:36:41.591222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.323 [2024-07-15 22:36:41.600902] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.323 [2024-07-15 22:36:41.600928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.323 [2024-07-15 22:36:41.611687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.323 [2024-07-15 22:36:41.611724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.323 [2024-07-15 22:36:41.621779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.323 [2024-07-15 22:36:41.621806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.323 [2024-07-15 22:36:41.632172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.323 [2024-07-15 22:36:41.632204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.323 [2024-07-15 22:36:41.644304] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.323 [2024-07-15 22:36:41.644331] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:58.323 [2024-07-15 22:36:41.653578] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:58.323 [2024-07-15 22:36:41.653605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeats with fresh timestamps from 22:36:41.664864 through 22:36:42.867959]
00:12:59.433 [2024-07-15 22:36:42.880162] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.433
[2024-07-15 22:36:42.880189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:59.433 [2024-07-15 22:36:42.891490] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:59.433 [2024-07-15 22:36:42.891517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:59.433 [2024-07-15 22:36:42.899667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:59.433 [2024-07-15 22:36:42.899693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:59.433
00:12:59.433 Latency(us)
00:12:59.433 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:59.433 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:12:59.433 Nvme1n1 : 5.01 12007.76 93.81 0.00 0.00 10646.47 4684.61 20874.43
00:12:59.433 ===================================================================================================================
00:12:59.433 Total : 12007.76 93.81 0.00 0.00 10646.47 4684.61 20874.43
00:12:59.433 [2024-07-15 22:36:42.906577] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:59.433 [2024-07-15 22:36:42.906599] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:59.433 [2024-07-15 22:36:42.914593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:59.433 [2024-07-15 22:36:42.914615] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:59.433 [2024-07-15 22:36:42.922650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:59.433 [2024-07-15 22:36:42.922674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:59.433 [2024-07-15 22:36:42.930696] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:59.433 [2024-07-15 22:36:42.930742]
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeats with fresh timestamps from 22:36:42.938712 through 22:36:43.123227]
00:12:59.694 [2024-07-15 22:36:43.131211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext:
*ERROR*: Requested NSID 1 already in use 00:12:59.694 [2024-07-15 22:36:43.131232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.694 [2024-07-15 22:36:43.139266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.694 [2024-07-15 22:36:43.139310] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.694 [2024-07-15 22:36:43.147287] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.694 [2024-07-15 22:36:43.147330] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.694 [2024-07-15 22:36:43.155304] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.694 [2024-07-15 22:36:43.155341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.694 [2024-07-15 22:36:43.163292] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.694 [2024-07-15 22:36:43.163312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.694 [2024-07-15 22:36:43.171314] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.694 [2024-07-15 22:36:43.171334] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1235307) - No such process 00:12:59.694 22:36:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1235307 00:12:59.694 22:36:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.694 22:36:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:59.694 22:36:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:59.694 22:36:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:59.694 
22:36:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:59.694 22:36:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:59.694 22:36:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:59.694 delay0 00:12:59.694 22:36:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:59.694 22:36:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:59.694 22:36:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:59.694 22:36:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:59.953 22:36:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:59.953 22:36:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:12:59.953 [2024-07-15 22:36:43.290552] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:06.531 Initializing NVMe Controllers 00:13:06.531 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:06.531 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:06.531 Initialization complete. Launching workers. 
00:13:06.531 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 68 00:13:06.531 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 355, failed to submit 33 00:13:06.531 success 125, unsuccess 230, failed 0 00:13:06.531 22:36:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:06.531 22:36:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:06.531 22:36:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:06.531 22:36:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:13:06.531 22:36:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:06.531 22:36:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:13:06.531 22:36:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:06.531 22:36:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:06.531 rmmod nvme_tcp 00:13:06.531 rmmod nvme_fabrics 00:13:06.531 rmmod nvme_keyring 00:13:06.531 22:36:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:06.531 22:36:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:13:06.531 22:36:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:13:06.531 22:36:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1233967 ']' 00:13:06.531 22:36:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1233967 00:13:06.531 22:36:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@942 -- # '[' -z 1233967 ']' 00:13:06.531 22:36:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # kill -0 1233967 00:13:06.531 22:36:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@947 -- # uname 00:13:06.531 22:36:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:13:06.531 22:36:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1233967 00:13:06.531 22:36:49 
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:13:06.531 22:36:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:13:06.531 22:36:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1233967' 00:13:06.531 killing process with pid 1233967 00:13:06.531 22:36:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@961 -- # kill 1233967 00:13:06.531 22:36:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # wait 1233967 00:13:06.531 22:36:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:06.531 22:36:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:06.531 22:36:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:06.531 22:36:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:06.531 22:36:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:06.531 22:36:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.531 22:36:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:06.531 22:36:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.437 22:36:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:08.437 00:13:08.437 real 0m27.794s 00:13:08.437 user 0m41.292s 00:13:08.437 sys 0m8.141s 00:13:08.437 22:36:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1118 -- # xtrace_disable 00:13:08.437 22:36:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:08.437 ************************************ 00:13:08.437 END TEST nvmf_zcopy 00:13:08.437 ************************************ 00:13:08.437 22:36:51 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:13:08.437 22:36:51 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:08.437 22:36:51 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:13:08.437 22:36:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:13:08.437 22:36:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:08.437 ************************************ 00:13:08.437 START TEST nvmf_nmic 00:13:08.437 ************************************ 00:13:08.437 22:36:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:08.696 * Looking for test storage... 00:13:08.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:08.696 
22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:13:08.696 22:36:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:10.596 22:36:53 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:10.596 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:10.596 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:10.596 Found 0000:0a:00.1 (0x8086 - 0x159b) 
00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:10.597 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:10.597 22:36:53 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:10.597 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:10.597 22:36:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:10.597 22:36:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:10.597 22:36:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:10.597 22:36:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:10.597 22:36:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:10.597 22:36:54 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:10.597 22:36:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:10.597 22:36:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:10.597 22:36:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:10.597 22:36:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:10.597 22:36:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:10.597 22:36:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:10.597 22:36:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:10.855 22:36:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:10.855 22:36:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:10.855 22:36:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:10.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:10.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:13:10.855 00:13:10.855 --- 10.0.0.2 ping statistics --- 00:13:10.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.855 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:13:10.855 22:36:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:10.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:10.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:13:10.855 00:13:10.855 --- 10.0.0.1 ping statistics --- 00:13:10.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.855 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:13:10.855 22:36:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:10.855 22:36:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:13:10.855 22:36:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:10.855 22:36:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:10.855 22:36:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:10.855 22:36:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:10.855 22:36:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:10.855 22:36:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:10.855 22:36:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:10.855 22:36:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:10.855 22:36:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:10.855 22:36:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:10.855 22:36:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:10.855 22:36:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1238679 00:13:10.855 22:36:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1238679 00:13:10.855 22:36:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:10.855 22:36:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@823 -- # '[' -z 1238679 ']' 00:13:10.855 22:36:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:13:10.855 22:36:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@828 -- # local max_retries=100 00:13:10.855 22:36:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.855 22:36:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # xtrace_disable 00:13:10.855 22:36:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:10.855 [2024-07-15 22:36:54.201174] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:13:10.855 [2024-07-15 22:36:54.201262] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:10.855 [2024-07-15 22:36:54.269511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:11.114 [2024-07-15 22:36:54.394578] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:11.114 [2024-07-15 22:36:54.394635] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:11.114 [2024-07-15 22:36:54.394652] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:11.114 [2024-07-15 22:36:54.394666] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:11.114 [2024-07-15 22:36:54.394678] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:11.114 [2024-07-15 22:36:54.394757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.114 [2024-07-15 22:36:54.394791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:11.114 [2024-07-15 22:36:54.394845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:11.114 [2024-07-15 22:36:54.394847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.680 22:36:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:13:11.680 22:36:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # return 0 00:13:11.680 22:36:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:11.680 22:36:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:11.680 22:36:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:11.680 22:36:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:11.680 22:36:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:11.680 22:36:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:11.680 22:36:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:11.680 [2024-07-15 22:36:55.175047] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:11.939 22:36:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:11.939 22:36:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:11.939 22:36:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:11.939 22:36:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:11.939 Malloc0 00:13:11.939 22:36:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:11.939 22:36:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:11.939 22:36:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:11.939 22:36:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:11.939 22:36:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:11.939 22:36:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:11.939 22:36:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:11.939 22:36:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:11.939 22:36:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:11.939 22:36:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.939 22:36:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:11.939 22:36:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:11.939 [2024-07-15 22:36:55.225934] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.939 22:36:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:11.939 22:36:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:11.939 test case1: single bdev can't be used in multiple subsystems 00:13:11.939 22:36:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:11.940 22:36:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:11.940 22:36:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:11.940 22:36:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:11.940 22:36:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:11.940 22:36:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:11.940 22:36:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:11.940 22:36:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:11.940 22:36:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:11.940 22:36:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:11.940 22:36:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:11.940 22:36:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:11.940 [2024-07-15 22:36:55.249797] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:11.940 [2024-07-15 22:36:55.249825] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:11.940 [2024-07-15 22:36:55.249839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.940 request: 00:13:11.940 { 00:13:11.940 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:11.940 "namespace": { 00:13:11.940 "bdev_name": "Malloc0", 00:13:11.940 "no_auto_visible": false 00:13:11.940 }, 00:13:11.940 "method": "nvmf_subsystem_add_ns", 00:13:11.940 "req_id": 1 00:13:11.940 } 00:13:11.940 Got JSON-RPC error response 00:13:11.940 response: 00:13:11.940 { 00:13:11.940 "code": -32602, 00:13:11.940 "message": "Invalid parameters" 00:13:11.940 } 00:13:11.940 22:36:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:13:11.940 22:36:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:11.940 22:36:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:11.940 22:36:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding 
namespace failed - expected result.' 00:13:11.940 Adding namespace failed - expected result. 00:13:11.940 22:36:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:11.940 test case2: host connect to nvmf target in multiple paths 00:13:11.940 22:36:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:11.940 22:36:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:11.940 22:36:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:11.940 [2024-07-15 22:36:55.257914] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:11.940 22:36:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:11.940 22:36:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:12.510 22:36:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:13.079 22:36:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:13.079 22:36:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1192 -- # local i=0 00:13:13.079 22:36:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:13:13.079 22:36:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:13:13.079 22:36:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # sleep 2 00:13:14.996 22:36:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:13:14.996 22:36:58 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:13:14.996 22:36:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:13:14.996 22:36:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:13:14.996 22:36:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:13:14.996 22:36:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # return 0 00:13:14.996 22:36:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:14.996 [global] 00:13:14.996 thread=1 00:13:14.996 invalidate=1 00:13:14.996 rw=write 00:13:14.996 time_based=1 00:13:14.996 runtime=1 00:13:14.996 ioengine=libaio 00:13:14.996 direct=1 00:13:14.996 bs=4096 00:13:14.996 iodepth=1 00:13:14.996 norandommap=0 00:13:14.996 numjobs=1 00:13:14.996 00:13:14.996 verify_dump=1 00:13:14.996 verify_backlog=512 00:13:14.996 verify_state_save=0 00:13:14.996 do_verify=1 00:13:14.996 verify=crc32c-intel 00:13:14.996 [job0] 00:13:14.996 filename=/dev/nvme0n1 00:13:15.255 Could not set queue depth (nvme0n1) 00:13:15.255 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:15.255 fio-3.35 00:13:15.255 Starting 1 thread 00:13:16.635 00:13:16.635 job0: (groupid=0, jobs=1): err= 0: pid=1239325: Mon Jul 15 22:36:59 2024 00:13:16.635 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:13:16.635 slat (nsec): min=5254, max=33696, avg=9032.79, stdev=4933.00 00:13:16.635 clat (usec): min=295, max=592, avg=343.74, stdev=41.79 00:13:16.635 lat (usec): min=301, max=598, avg=352.77, stdev=43.35 00:13:16.635 clat percentiles (usec): 00:13:16.635 | 1.00th=[ 302], 5.00th=[ 310], 10.00th=[ 314], 20.00th=[ 318], 00:13:16.635 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 330], 60.00th=[ 338], 00:13:16.635 | 70.00th=[ 347], 
80.00th=[ 359], 90.00th=[ 379], 95.00th=[ 424], 00:13:16.635 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[ 570], 99.95th=[ 594], 00:13:16.635 | 99.99th=[ 594] 00:13:16.635 write: IOPS=1834, BW=7337KiB/s (7513kB/s)(7344KiB/1001msec); 0 zone resets 00:13:16.635 slat (nsec): min=6860, max=57620, avg=13060.95, stdev=7864.60 00:13:16.635 clat (usec): min=184, max=803, avg=230.56, stdev=40.85 00:13:16.635 lat (usec): min=191, max=825, avg=243.62, stdev=45.80 00:13:16.635 clat percentiles (usec): 00:13:16.635 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 200], 00:13:16.635 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 221], 60.00th=[ 229], 00:13:16.635 | 70.00th=[ 243], 80.00th=[ 253], 90.00th=[ 273], 95.00th=[ 293], 00:13:16.635 | 99.00th=[ 400], 99.50th=[ 424], 99.90th=[ 490], 99.95th=[ 807], 00:13:16.635 | 99.99th=[ 807] 00:13:16.635 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:13:16.635 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:16.635 lat (usec) : 250=42.50%, 500=56.38%, 750=1.10%, 1000=0.03% 00:13:16.635 cpu : usr=3.70%, sys=4.40%, ctx=3372, majf=0, minf=2 00:13:16.635 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:16.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.635 issued rwts: total=1536,1836,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:16.635 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:16.635 00:13:16.635 Run status group 0 (all jobs): 00:13:16.635 READ: bw=6138KiB/s (6285kB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=6144KiB (6291kB), run=1001-1001msec 00:13:16.635 WRITE: bw=7337KiB/s (7513kB/s), 7337KiB/s-7337KiB/s (7513kB/s-7513kB/s), io=7344KiB (7520kB), run=1001-1001msec 00:13:16.635 00:13:16.635 Disk stats (read/write): 00:13:16.635 nvme0n1: ios=1554/1536, merge=0/0, ticks=620/307, in_queue=927, 
util=96.39% 00:13:16.635 22:36:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:16.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:16.635 22:36:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:16.635 22:36:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1213 -- # local i=0 00:13:16.635 22:36:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:13:16.635 22:36:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.635 22:36:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:13:16.635 22:36:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.635 22:36:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1225 -- # return 0 00:13:16.635 22:36:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:16.635 22:36:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:16.635 22:36:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:16.635 22:36:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:13:16.635 22:36:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:16.635 22:36:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:13:16.635 22:36:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:16.635 22:36:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:16.635 rmmod nvme_tcp 00:13:16.635 rmmod nvme_fabrics 00:13:16.635 rmmod nvme_keyring 00:13:16.635 22:37:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:16.635 22:37:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:13:16.635 22:37:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:13:16.635 22:37:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 
1238679 ']' 00:13:16.635 22:37:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1238679 00:13:16.635 22:37:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@942 -- # '[' -z 1238679 ']' 00:13:16.635 22:37:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # kill -0 1238679 00:13:16.635 22:37:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@947 -- # uname 00:13:16.635 22:37:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:13:16.635 22:37:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1238679 00:13:16.635 22:37:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:13:16.635 22:37:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:13:16.635 22:37:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1238679' 00:13:16.635 killing process with pid 1238679 00:13:16.635 22:37:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@961 -- # kill 1238679 00:13:16.635 22:37:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # wait 1238679 00:13:16.894 22:37:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:16.894 22:37:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:16.894 22:37:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:16.894 22:37:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:16.894 22:37:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:16.894 22:37:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.894 22:37:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:16.894 22:37:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.466 22:37:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr 
flush cvl_0_1 00:13:19.466 00:13:19.466 real 0m10.474s 00:13:19.466 user 0m24.670s 00:13:19.466 sys 0m2.365s 00:13:19.466 22:37:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1118 -- # xtrace_disable 00:13:19.466 22:37:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:19.466 ************************************ 00:13:19.466 END TEST nvmf_nmic 00:13:19.466 ************************************ 00:13:19.466 22:37:02 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:13:19.466 22:37:02 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:19.466 22:37:02 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:13:19.466 22:37:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:13:19.466 22:37:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:19.466 ************************************ 00:13:19.466 START TEST nvmf_fio_target 00:13:19.466 ************************************ 00:13:19.466 22:37:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:19.466 * Looking for test storage... 
00:13:19.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:19.466 22:37:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:19.466 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:19.466 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:19.466 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:19.466 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:19.466 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:19.466 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:19.466 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:19.466 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:19.466 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:19.466 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:19.466 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:19.466 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:19.466 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:19.466 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:19.467 22:37:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:21.371 
22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:21.371 
22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:21.371 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:21.371 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 
-- # [[ ice == unknown ]] 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:21.371 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.371 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:21.372 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:21.372 22:37:04 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:21.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:21.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:13:21.372 00:13:21.372 --- 10.0.0.2 ping statistics --- 00:13:21.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.372 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:21.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:21.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms
00:13:21.372
00:13:21.372 --- 10.0.0.1 ping statistics ---
00:13:21.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:21.372 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms
00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0
00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF
00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@716 -- # xtrace_disable
00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1241398
00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1241398
00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@823 -- # '[' -z 1241398 ']'
00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@828 -- # local max_retries=100
00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:21.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # xtrace_disable
00:13:21.372 22:37:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:13:21.372 [2024-07-15 22:37:04.658053] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization...
00:13:21.372 [2024-07-15 22:37:04.658136] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:21.372 [2024-07-15 22:37:04.722127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:21.372 [2024-07-15 22:37:04.831868] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:21.372 [2024-07-15 22:37:04.831933] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:21.372 [2024-07-15 22:37:04.831963] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:21.372 [2024-07-15 22:37:04.831975] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:21.372 [2024-07-15 22:37:04.831984] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:21.372 [2024-07-15 22:37:04.832051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:13:21.372 [2024-07-15 22:37:04.832108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:13:21.372 [2024-07-15 22:37:04.832110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:13:21.372 [2024-07-15 22:37:04.832080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:13:21.631 22:37:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:13:21.631 22:37:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # return 0
00:13:21.631 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:13:21.631 22:37:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable
00:13:21.631 22:37:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:13:21.631 22:37:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:21.631 22:37:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:13:21.889 [2024-07-15 22:37:05.251633] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:21.889 22:37:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:13:22.146 22:37:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 '
00:13:22.146 22:37:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:13:22.403 22:37:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1
00:13:22.403 22:37:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:13:22.660 22:37:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 '
00:13:22.660 22:37:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:13:22.917 22:37:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3
00:13:22.917 22:37:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
00:13:23.174 22:37:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:13:23.431 22:37:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 '
00:13:23.431 22:37:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:13:23.688 22:37:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 '
00:13:23.688 22:37:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:13:23.945 22:37:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6
00:13:23.945 22:37:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
00:13:24.202 22:37:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:13:24.459 22:37:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:13:24.459 22:37:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:13:24.716 22:37:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:13:24.716 22:37:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:13:24.973 22:37:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:25.232 [2024-07-15 22:37:08.606016] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:25.232 22:37:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
00:13:25.488 22:37:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
00:13:25.746 22:37:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:13:26.313 22:37:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:13:26.313 22:37:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1192 -- # local i=0
00:13:26.313 22:37:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0
00:13:26.313 22:37:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # [[ -n 4 ]]
00:13:26.313 22:37:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # nvme_device_counter=4
00:13:26.313 22:37:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # sleep 2
00:13:28.842 22:37:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # (( i++ <= 15 ))
00:13:28.842 22:37:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL
00:13:28.842 22:37:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME
00:13:28.842 22:37:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_devices=4
00:13:28.842 22:37:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter ))
00:13:28.842 22:37:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # return 0
00:13:28.842 22:37:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:13:28.842 [global]
00:13:28.842 thread=1
00:13:28.842 invalidate=1
00:13:28.842 rw=write
00:13:28.842 time_based=1
00:13:28.842 runtime=1
00:13:28.842 ioengine=libaio
00:13:28.842 direct=1
00:13:28.842 bs=4096
00:13:28.842 iodepth=1
00:13:28.842 norandommap=0
00:13:28.842 numjobs=1
00:13:28.842
00:13:28.842 verify_dump=1
00:13:28.842 verify_backlog=512
00:13:28.842 verify_state_save=0
00:13:28.842 do_verify=1
00:13:28.842 verify=crc32c-intel
00:13:28.842 [job0]
00:13:28.842 filename=/dev/nvme0n1
00:13:28.842 [job1]
00:13:28.842 filename=/dev/nvme0n2
00:13:28.842 [job2]
00:13:28.842 filename=/dev/nvme0n3
00:13:28.842 [job3]
00:13:28.842 filename=/dev/nvme0n4
00:13:28.842 Could not set queue depth (nvme0n1)
00:13:28.842 Could not set queue depth (nvme0n2)
00:13:28.842 Could not set queue depth (nvme0n3)
00:13:28.842 Could not set queue depth (nvme0n4)
00:13:28.842 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:13:28.842 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:13:28.842 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:13:28.842 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:13:28.842 fio-3.35
00:13:28.842 Starting 4 threads
00:13:29.777
00:13:29.777 job0: (groupid=0, jobs=1): err= 0: pid=1242372: Mon Jul 15 22:37:13 2024
00:13:29.777 read: IOPS=1296, BW=5187KiB/s (5311kB/s)(5192KiB/1001msec)
00:13:29.777 slat (nsec): min=6219, max=62687, avg=15257.81, stdev=7414.60
00:13:29.777 clat (usec): min=300, max=3283, avg=404.77, stdev=101.61
00:13:29.777 lat (usec): min=308, max=3295, avg=420.02, stdev=103.19
00:13:29.777 clat percentiles (usec):
00:13:29.777 | 1.00th=[ 310], 5.00th=[ 318], 10.00th=[ 322], 20.00th=[ 343],
00:13:29.777 | 30.00th=[ 359], 40.00th=[ 371], 50.00th=[ 400], 60.00th=[ 424],
00:13:29.777 | 70.00th=[ 441], 80.00th=[ 461], 90.00th=[ 478], 95.00th=[ 502],
00:13:29.777 | 99.00th=[ 578], 99.50th=[ 603], 99.90th=[ 644], 99.95th=[ 3294],
00:13:29.777 | 99.99th=[ 3294]
00:13:29.777 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets
00:13:29.777 slat (nsec): min=6495, max=68818, avg=16379.49, stdev=9934.24
00:13:29.777 clat (usec): min=189, max=482, avg=271.51, stdev=68.04
00:13:29.777 lat (usec): min=198, max=497, avg=287.89, stdev=71.66
00:13:29.777 clat percentiles (usec):
00:13:29.777 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 200], 20.00th=[ 206],
00:13:29.777 | 30.00th=[ 217], 40.00th=[ 233], 50.00th=[ 262], 60.00th=[ 277],
00:13:29.777 | 70.00th=[ 289], 80.00th=[ 334], 90.00th=[ 388], 95.00th=[ 404],
00:13:29.777 | 99.00th=[ 433], 99.50th=[ 449], 99.90th=[ 465], 99.95th=[ 482],
00:13:29.777 | 99.99th=[ 482]
00:13:29.777 bw ( KiB/s): min= 8192, max= 8192, per=55.06%, avg=8192.00, stdev= 0.00, samples=1
00:13:29.777 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1
00:13:29.777 lat (usec) : 250=24.74%, 500=72.87%, 750=2.36%
00:13:29.777 lat (msec) : 4=0.04%
00:13:29.777 cpu : usr=3.20%, sys=5.10%, ctx=2834, majf=0, minf=1
00:13:29.777 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:29.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:29.777 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:29.777 issued rwts: total=1298,1536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:29.777 latency : target=0, window=0, percentile=100.00%, depth=1
00:13:29.777 job1: (groupid=0, jobs=1): err= 0: pid=1242391: Mon Jul 15 22:37:13 2024
00:13:29.777 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec)
00:13:29.777 slat (nsec): min=5776, max=45991, avg=14857.25, stdev=6267.10
00:13:29.777 clat (usec): min=298, max=41032, avg=637.82, stdev=2962.09
00:13:29.777 lat (usec): min=313, max=41065, avg=652.68, stdev=2962.84
00:13:29.777 clat percentiles (usec):
00:13:29.777 | 1.00th=[ 314], 5.00th=[ 326], 10.00th=[ 334], 20.00th=[ 343],
00:13:29.777 | 30.00th=[ 351], 40.00th=[ 359], 50.00th=[ 404], 60.00th=[ 441],
00:13:29.777 | 70.00th=[ 465], 80.00th=[ 478], 90.00th=[ 502], 95.00th=[ 529],
00:13:29.777 | 99.00th=[ 627], 99.50th=[28967], 99.90th=[41157], 99.95th=[41157],
00:13:29.777 | 99.99th=[41157]
00:13:29.777 write: IOPS=1236, BW=4947KiB/s (5066kB/s)(4952KiB/1001msec); 0 zone resets
00:13:29.777 slat (nsec): min=6221, max=74534, avg=14834.60, stdev=9467.47
00:13:29.777 clat (usec): min=184, max=514, avg=245.42, stdev=50.17
00:13:29.777 lat (usec): min=192, max=527, avg=260.25, stdev=54.51
00:13:29.777 clat percentiles (usec):
00:13:29.777 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 200], 20.00th=[ 206],
00:13:29.777 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 229], 60.00th=[ 237],
00:13:29.777 | 70.00th=[ 255], 80.00th=[ 281], 90.00th=[ 330], 95.00th=[ 347],
00:13:29.777 | 99.00th=[ 400], 99.50th=[ 412], 99.90th=[ 474], 99.95th=[ 515],
00:13:29.777 | 99.99th=[ 515]
00:13:29.777 bw ( KiB/s): min= 4096, max= 4096, per=27.53%, avg=4096.00, stdev= 0.00, samples=1
00:13:29.777 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:13:29.777 lat (usec) : 250=37.71%, 500=57.21%, 750=4.77%
00:13:29.777 lat (msec) : 2=0.04%, 50=0.27%
00:13:29.777 cpu : usr=3.20%, sys=2.80%, ctx=2263, majf=0, minf=1
00:13:29.777 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:29.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:29.777 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:29.777 issued rwts: total=1024,1238,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:29.777 latency : target=0, window=0, percentile=100.00%, depth=1
00:13:29.777 job2: (groupid=0, jobs=1): err= 0: pid=1242430: Mon Jul 15 22:37:13 2024
00:13:29.777 read: IOPS=20, BW=82.3KiB/s (84.2kB/s)(84.0KiB/1021msec)
00:13:29.777 slat (nsec): min=12633, max=33923, avg=20314.76, stdev=8857.40
00:13:29.777 clat (usec): min=40851, max=41467, avg=40998.18, stdev=123.12
00:13:29.777 lat (usec): min=40885, max=41486, avg=41018.50, stdev=121.06
00:13:29.777 clat percentiles (usec):
00:13:29.777 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:13:29.777 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:13:29.777 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:13:29.777 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681],
00:13:29.777 | 99.99th=[41681]
00:13:29.777 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets
00:13:29.777 slat (nsec): min=7471, max=73078, avg=23456.24, stdev=11893.05
00:13:29.777 clat (usec): min=207, max=427, avg=282.30, stdev=49.32
00:13:29.777 lat (usec): min=222, max=453, avg=305.76, stdev=50.88
00:13:29.777 clat percentiles (usec):
00:13:29.777 | 1.00th=[ 217], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 237],
00:13:29.777 | 30.00th=[ 245], 40.00th=[ 260], 50.00th=[ 269], 60.00th=[ 285],
00:13:29.777 | 70.00th=[ 310], 80.00th=[ 334], 90.00th=[ 355], 95.00th=[ 375],
00:13:29.777 | 99.00th=[ 412], 99.50th=[ 412], 99.90th=[ 429], 99.95th=[ 429],
00:13:29.777 | 99.99th=[ 429]
00:13:29.777 bw ( KiB/s): min= 4096, max= 4096, per=27.53%, avg=4096.00, stdev= 0.00, samples=1
00:13:29.777 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:13:29.777 lat (usec) : 250=33.77%, 500=62.29%
00:13:29.777 lat (msec) : 50=3.94%
00:13:29.777 cpu : usr=0.29%, sys=1.37%, ctx=534, majf=0, minf=1
00:13:29.777 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:29.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:29.777 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:29.777 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:29.777 latency : target=0, window=0, percentile=100.00%, depth=1
00:13:29.777 job3: (groupid=0, jobs=1): err= 0: pid=1242443: Mon Jul 15 22:37:13 2024
00:13:29.777 read: IOPS=74, BW=300KiB/s (307kB/s)(300KiB/1001msec)
00:13:29.777 slat (nsec): min=7667, max=33001, avg=13324.05, stdev=5691.82
00:13:29.777 clat (usec): min=328, max=41144, avg=11210.08, stdev=18074.82
00:13:29.777 lat (usec): min=339, max=41152, avg=11223.41, stdev=18077.91
00:13:29.777 clat percentiles (usec):
00:13:29.777 | 1.00th=[ 330], 5.00th=[ 334], 10.00th=[ 338], 20.00th=[ 347],
00:13:29.777 | 30.00th=[ 355], 40.00th=[ 359], 50.00th=[ 367], 60.00th=[ 379],
00:13:29.777 | 70.00th=[ 562], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:13:29.777 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:13:29.777 | 99.99th=[41157]
00:13:29.777 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets
00:13:29.777 slat (nsec): min=7650, max=66973, avg=22238.91, stdev=11220.50
00:13:29.777 clat (usec): min=207, max=465, avg=282.61, stdev=45.19
00:13:29.777 lat (usec): min=217, max=503, avg=304.85, stdev=47.82
00:13:29.777 clat percentiles (usec):
00:13:29.777 | 1.00th=[ 215], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 239],
00:13:29.777 | 30.00th=[ 255], 40.00th=[ 265], 50.00th=[ 277], 60.00th=[ 289],
00:13:29.777 | 70.00th=[ 306], 80.00th=[ 326], 90.00th=[ 343], 95.00th=[ 363],
00:13:29.777 | 99.00th=[ 392], 99.50th=[ 420], 99.90th=[ 465], 99.95th=[ 465],
00:13:29.777 | 99.99th=[ 465]
00:13:29.777 bw ( KiB/s): min= 4096, max= 4096, per=27.53%, avg=4096.00, stdev= 0.00, samples=1
00:13:29.777 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:13:29.777 lat (usec) : 250=23.17%, 500=72.57%, 750=0.68%, 1000=0.17%
00:13:29.777 lat (msec) : 50=3.41%
00:13:29.777 cpu : usr=0.50%, sys=1.30%, ctx=587, majf=0, minf=2
00:13:29.777 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:29.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:29.777 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:29.777 issued rwts: total=75,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:29.777 latency : target=0, window=0, percentile=100.00%, depth=1
00:13:29.777
00:13:29.777 Run status group 0 (all jobs):
00:13:29.777 READ: bw=9473KiB/s (9700kB/s), 82.3KiB/s-5187KiB/s (84.2kB/s-5311kB/s), io=9672KiB (9904kB), run=1001-1021msec
00:13:29.777 WRITE: bw=14.5MiB/s (15.2MB/s), 2006KiB/s-6138KiB/s (2054kB/s-6285kB/s), io=14.8MiB (15.6MB), run=1001-1021msec
00:13:29.777
00:13:29.777 Disk stats (read/write):
00:13:29.777 nvme0n1: ios=1074/1408, merge=0/0, ticks=632/367, in_queue=999, util=90.08%
00:13:29.777 nvme0n2: ios=770/1024, merge=0/0, ticks=565/236, in_queue=801, util=85.88%
00:13:29.777 nvme0n3: ios=40/512, merge=0/0, ticks=1568/136, in_queue=1704, util=96.82%
00:13:29.777 nvme0n4: ios=16/512, merge=0/0, ticks=656/133, in_queue=789, util=89.52%
00:13:29.778 22:37:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:13:29.778 [global]
00:13:29.778 thread=1
00:13:29.778 invalidate=1
00:13:29.778 rw=randwrite
00:13:29.778 time_based=1
00:13:29.778 runtime=1
00:13:29.778 ioengine=libaio
00:13:29.778 direct=1
00:13:29.778 bs=4096
00:13:29.778 iodepth=1
00:13:29.778 norandommap=0
00:13:29.778 numjobs=1
00:13:29.778
00:13:29.778 verify_dump=1
00:13:29.778 verify_backlog=512
00:13:29.778 verify_state_save=0
00:13:29.778 do_verify=1
00:13:29.778 verify=crc32c-intel
00:13:29.778 [job0]
00:13:29.778 filename=/dev/nvme0n1
00:13:29.778 [job1]
00:13:29.778 filename=/dev/nvme0n2
00:13:29.778 [job2]
00:13:29.778 filename=/dev/nvme0n3
00:13:29.778 [job3]
00:13:29.778 filename=/dev/nvme0n4
00:13:30.036 Could not set queue depth (nvme0n1)
00:13:30.036 Could not set queue depth (nvme0n2)
00:13:30.036 Could not set queue depth (nvme0n3)
00:13:30.036 Could not set queue depth (nvme0n4)
00:13:30.036 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:13:30.036 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:13:30.036 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:13:30.036 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:13:30.036 fio-3.35
00:13:30.036 Starting 4 threads
00:13:31.410
00:13:31.410 job0: (groupid=0, jobs=1): err= 0: pid=1242700: Mon Jul 15 22:37:14 2024
00:13:31.410 read: IOPS=20, BW=83.9KiB/s (85.9kB/s)(84.0KiB/1001msec)
00:13:31.410 slat (nsec): min=8911, max=43015, avg=17012.38, stdev=9700.23
00:13:31.410 clat (usec): min=40733, max=41199, avg=40966.20, stdev=87.99
00:13:31.410 lat (usec): min=40742, max=41213, avg=40983.21, stdev=88.06
00:13:31.410 clat percentiles (usec):
00:13:31.410 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157],
00:13:31.410 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:13:31.410 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:13:31.410 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:13:31.410 | 99.99th=[41157]
00:13:31.410 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets
00:13:31.410 slat (nsec): min=8227, max=47651, avg=16104.92, stdev=6680.44
00:13:31.410 clat (usec): min=204, max=494, avg=251.70, stdev=22.65
00:13:31.410 lat (usec): min=212, max=513, avg=267.81, stdev=24.80
00:13:31.410 clat percentiles (usec):
00:13:31.410 | 1.00th=[ 219], 5.00th=[ 225], 10.00th=[ 233], 20.00th=[ 237],
00:13:31.410 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 253],
00:13:31.410 | 70.00th=[ 258], 80.00th=[ 262], 90.00th=[ 273], 95.00th=[ 281],
00:13:31.411 | 99.00th=[ 318], 99.50th=[ 392], 99.90th=[ 494], 99.95th=[ 494],
00:13:31.411 | 99.99th=[ 494]
00:13:31.411 bw ( KiB/s): min= 4096, max= 4096, per=33.79%, avg=4096.00, stdev= 0.00, samples=1
00:13:31.411 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:13:31.411 lat (usec) : 250=48.22%, 500=47.84%
00:13:31.411 lat (msec) : 50=3.94%
00:13:31.411 cpu : usr=0.40%, sys=1.30%, ctx=533, majf=0, minf=1
00:13:31.411 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:31.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:31.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:31.411 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:31.411 latency : target=0, window=0, percentile=100.00%, depth=1
00:13:31.411 job1: (groupid=0, jobs=1): err= 0: pid=1242701: Mon Jul 15 22:37:14 2024
00:13:31.411 read: IOPS=20, BW=83.0KiB/s (85.0kB/s)(84.0KiB/1012msec)
00:13:31.411 slat (nsec): min=7599, max=45777, avg=15462.33, stdev=7362.26
00:13:31.411 clat (usec): min=40930, max=41015, avg=40979.60, stdev=24.14
00:13:31.411 lat (usec): min=40937, max=41031, avg=40995.06, stdev=22.89
00:13:31.411 clat percentiles (usec):
00:13:31.411 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:13:31.411 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:13:31.411 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:13:31.411 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:13:31.411 | 99.99th=[41157]
00:13:31.411 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets
00:13:31.411 slat (nsec): min=7655, max=61471, avg=18971.90, stdev=9124.63
00:13:31.411 clat (usec): min=212, max=415, avg=269.19, stdev=43.69
00:13:31.411 lat (usec): min=225, max=450, avg=288.17, stdev=45.18
00:13:31.411 clat percentiles (usec):
00:13:31.411 | 1.00th=[ 221], 5.00th=[ 229], 10.00th=[ 231], 20.00th=[ 239],
00:13:31.411 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 260],
00:13:31.411 | 70.00th=[ 277], 80.00th=[ 297], 90.00th=[ 347], 95.00th=[ 367],
00:13:31.411 | 99.00th=[ 400], 99.50th=[ 408], 99.90th=[ 416], 99.95th=[ 416],
00:13:31.411 | 99.99th=[ 416]
00:13:31.411 bw ( KiB/s): min= 4096, max= 4096, per=33.79%, avg=4096.00, stdev= 0.00, samples=1
00:13:31.411 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:13:31.411 lat (usec) : 250=45.03%, 500=51.03%
00:13:31.411 lat (msec) : 50=3.94%
00:13:31.411 cpu : usr=0.79%, sys=1.09%, ctx=536, majf=0, minf=1
00:13:31.411 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:31.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:31.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:31.411 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:31.411 latency : target=0, window=0, percentile=100.00%, depth=1
00:13:31.411 job2: (groupid=0, jobs=1): err= 0: pid=1242702: Mon Jul 15 22:37:14 2024
00:13:31.411 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec)
00:13:31.411 slat (nsec): min=5764, max=69209, avg=12215.80, stdev=7853.70
00:13:31.411 clat (usec): min=289, max=778, avg=366.12, stdev=76.27
00:13:31.411 lat (usec): min=306, max=785, avg=378.34, stdev=81.63
00:13:31.411 clat percentiles (usec):
00:13:31.411 | 1.00th=[ 306], 5.00th=[ 310], 10.00th=[ 314], 20.00th=[ 318],
00:13:31.411 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 338], 60.00th=[ 347],
00:13:31.411 | 70.00th=[ 359], 80.00th=[ 371], 90.00th=[ 510], 95.00th=[ 553],
00:13:31.411 | 99.00th=[ 611], 99.50th=[ 619], 99.90th=[ 676], 99.95th=[ 783],
00:13:31.411 | 99.99th=[ 783]
00:13:31.411 write: IOPS=1617, BW=6470KiB/s (6625kB/s)(6476KiB/1001msec); 0 zone resets
00:13:31.411 slat (nsec): min=6777, max=62310, avg=13262.70, stdev=8672.20
00:13:31.411 clat (usec): min=191, max=837, avg=238.37, stdev=52.57
00:13:31.411 lat (usec): min=199, max=846, avg=251.64, stdev=57.98
00:13:31.411 clat percentiles (usec):
00:13:31.411 | 1.00th=[ 196], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 204],
00:13:31.411 | 30.00th=[ 208], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 223],
00:13:31.411 | 70.00th=[ 243], 80.00th=[ 277], 90.00th=[ 306], 95.00th=[ 347],
00:13:31.411 | 99.00th=[ 396], 99.50th=[ 420], 99.90th=[ 766], 99.95th=[ 840],
00:13:31.411 | 99.99th=[ 840]
00:13:31.411 bw ( KiB/s): min= 8192, max= 8192, per=67.57%, avg=8192.00, stdev= 0.00, samples=1
00:13:31.411 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1
00:13:31.411 lat (usec) : 250=37.24%, 500=56.55%, 750=6.12%, 1000=0.10%
00:13:31.411 cpu : usr=2.90%, sys=5.40%, ctx=3156, majf=0, minf=1
00:13:31.411 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:31.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:31.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:31.411 issued rwts: total=1536,1619,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:31.411 latency : target=0, window=0, percentile=100.00%, depth=1
00:13:31.411 job3: (groupid=0, jobs=1): err= 0: pid=1242703: Mon Jul 15 22:37:14 2024
00:13:31.411 read: IOPS=37, BW=150KiB/s (153kB/s)(156KiB/1041msec)
00:13:31.411 slat (nsec): min=7739, max=46356, avg=19761.51, stdev=9862.25
00:13:31.411 clat (usec): min=435, max=41160, avg=22307.03, stdev=20430.58
00:13:31.411 lat (usec): min=469, max=41182, avg=22326.80, stdev=20426.94
00:13:31.411 clat percentiles (usec):
00:13:31.411 | 1.00th=[ 437], 5.00th=[ 445], 10.00th=[ 482], 20.00th=[ 498],
00:13:31.411 | 30.00th=[ 529], 40.00th=[ 611], 50.00th=[41157], 60.00th=[41157],
00:13:31.411 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:13:31.411 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:13:31.411 | 99.99th=[41157]
00:13:31.411 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets
00:13:31.411 slat (nsec): min=6525, max=76974, avg=22311.42, stdev=11213.20
00:13:31.411 clat (usec): min=198, max=933, avg=302.47, stdev=70.34
00:13:31.411 lat (usec): min=231, max=974, avg=324.78, stdev=69.56
00:13:31.411 clat percentiles (usec):
00:13:31.411 | 1.00th=[ 225], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 245],
00:13:31.411 | 30.00th=[ 253], 40.00th=[ 269], 50.00th=[ 285], 60.00th=[ 306],
00:13:31.411 | 70.00th=[ 330], 80.00th=[ 347], 90.00th=[ 379], 95.00th=[ 441],
00:13:31.411 | 99.00th=[ 523], 99.50th=[ 562], 99.90th=[ 938], 99.95th=[ 938],
00:13:31.411 | 99.99th=[ 938]
00:13:31.411 bw ( KiB/s): min= 4096, max= 4096, per=33.79%, avg=4096.00, stdev= 0.00, samples=1
00:13:31.411 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:13:31.411 lat (usec) : 250=25.59%, 500=67.33%, 750=3.09%, 1000=0.18%
00:13:31.411 lat (msec) : 50=3.81%
00:13:31.411 cpu : usr=0.48%, sys=1.25%, ctx=552, majf=0, minf=2
00:13:31.411 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:31.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:31.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:31.411
issued rwts: total=39,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:31.411 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:31.411 00:13:31.411 Run status group 0 (all jobs): 00:13:31.411 READ: bw=6213KiB/s (6362kB/s), 83.0KiB/s-6138KiB/s (85.0kB/s-6285kB/s), io=6468KiB (6623kB), run=1001-1041msec 00:13:31.411 WRITE: bw=11.8MiB/s (12.4MB/s), 1967KiB/s-6470KiB/s (2015kB/s-6625kB/s), io=12.3MiB (12.9MB), run=1001-1041msec 00:13:31.411 00:13:31.411 Disk stats (read/write): 00:13:31.411 nvme0n1: ios=67/512, merge=0/0, ticks=723/117, in_queue=840, util=86.67% 00:13:31.411 nvme0n2: ios=42/512, merge=0/0, ticks=1686/140, in_queue=1826, util=96.54% 00:13:31.411 nvme0n3: ios=1321/1536, merge=0/0, ticks=1406/353, in_queue=1759, util=97.70% 00:13:31.411 nvme0n4: ios=57/512, merge=0/0, ticks=1616/141, in_queue=1757, util=96.94% 00:13:31.411 22:37:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:31.411 [global] 00:13:31.411 thread=1 00:13:31.411 invalidate=1 00:13:31.411 rw=write 00:13:31.411 time_based=1 00:13:31.411 runtime=1 00:13:31.411 ioengine=libaio 00:13:31.411 direct=1 00:13:31.411 bs=4096 00:13:31.411 iodepth=128 00:13:31.411 norandommap=0 00:13:31.411 numjobs=1 00:13:31.411 00:13:31.411 verify_dump=1 00:13:31.411 verify_backlog=512 00:13:31.411 verify_state_save=0 00:13:31.411 do_verify=1 00:13:31.411 verify=crc32c-intel 00:13:31.411 [job0] 00:13:31.411 filename=/dev/nvme0n1 00:13:31.411 [job1] 00:13:31.411 filename=/dev/nvme0n2 00:13:31.411 [job2] 00:13:31.411 filename=/dev/nvme0n3 00:13:31.411 [job3] 00:13:31.411 filename=/dev/nvme0n4 00:13:31.411 Could not set queue depth (nvme0n1) 00:13:31.411 Could not set queue depth (nvme0n2) 00:13:31.411 Could not set queue depth (nvme0n3) 00:13:31.411 Could not set queue depth (nvme0n4) 00:13:31.672 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128
00:13:31.672 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:13:31.672 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:13:31.672 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:13:31.672 fio-3.35
00:13:31.672 Starting 4 threads
00:13:33.084
00:13:33.084 job0: (groupid=0, jobs=1): err= 0: pid=1242939: Mon Jul 15 22:37:16 2024
00:13:33.084 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec)
00:13:33.084 slat (usec): min=2, max=10695, avg=208.92, stdev=1071.62
00:13:33.084 clat (usec): min=9297, max=69284, avg=25803.15, stdev=11017.60
00:13:33.084 lat (usec): min=9323, max=69302, avg=26012.07, stdev=11055.93
00:13:33.084 clat percentiles (usec):
00:13:33.084 | 1.00th=[10028], 5.00th=[11731], 10.00th=[13173], 20.00th=[19006],
00:13:33.084 | 30.00th=[21365], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987],
00:13:33.084 | 70.00th=[25560], 80.00th=[31327], 90.00th=[40633], 95.00th=[47449],
00:13:33.084 | 99.00th=[67634], 99.50th=[69731], 99.90th=[69731], 99.95th=[69731],
00:13:33.084 | 99.99th=[69731]
00:13:33.084 write: IOPS=2816, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1003msec); 0 zone resets
00:13:33.084 slat (usec): min=3, max=13820, avg=155.47, stdev=719.31
00:13:33.084 clat (usec): min=1578, max=35794, avg=21134.51, stdev=6496.56
00:13:33.084 lat (usec): min=2589, max=35810, avg=21289.97, stdev=6510.44
00:13:33.084 clat percentiles (usec):
00:13:33.084 | 1.00th=[ 6325], 5.00th=[10683], 10.00th=[11600], 20.00th=[14746],
00:13:33.084 | 30.00th=[17957], 40.00th=[19530], 50.00th=[21627], 60.00th=[23462],
00:13:33.084 | 70.00th=[25035], 80.00th=[27132], 90.00th=[28967], 95.00th=[31327],
00:13:33.084 | 99.00th=[33424], 99.50th=[33817], 99.90th=[35914], 99.95th=[35914],
00:13:33.084 | 99.99th=[35914]
00:13:33.084 bw ( KiB/s): min=10176, max=11408, per=17.19%, avg=10792.00, stdev=871.16, samples=2
00:13:33.084 iops : min= 2544, max= 2852, avg=2698.00, stdev=217.79, samples=2
00:13:33.084 lat (msec) : 2=0.02%, 4=0.26%, 10=2.41%, 20=29.92%, 50=65.05%
00:13:33.084 lat (msec) : 100=2.34%
00:13:33.084 cpu : usr=4.29%, sys=4.19%, ctx=326, majf=0, minf=1
00:13:33.084 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8%
00:13:33.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:33.084 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:13:33.084 issued rwts: total=2560,2825,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:33.084 latency : target=0, window=0, percentile=100.00%, depth=128
00:13:33.084 job1: (groupid=0, jobs=1): err= 0: pid=1242940: Mon Jul 15 22:37:16 2024
00:13:33.084 read: IOPS=5724, BW=22.4MiB/s (23.4MB/s)(22.5MiB/1006msec)
00:13:33.084 slat (usec): min=3, max=8926, avg=84.49, stdev=590.42
00:13:33.084 clat (usec): min=2085, max=21004, avg=10833.51, stdev=2568.32
00:13:33.084 lat (usec): min=4297, max=21017, avg=10918.00, stdev=2600.20
00:13:33.084 clat percentiles (usec):
00:13:33.084 | 1.00th=[ 6325], 5.00th=[ 7898], 10.00th=[ 8848], 20.00th=[ 9241],
00:13:33.084 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10159],
00:13:33.084 | 70.00th=[11076], 80.00th=[12780], 90.00th=[14746], 95.00th=[16450],
00:13:33.084 | 99.00th=[17957], 99.50th=[18220], 99.90th=[19006], 99.95th=[19006],
00:13:33.084 | 99.99th=[21103]
00:13:33.084 write: IOPS=6107, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1006msec); 0 zone resets
00:13:33.084 slat (usec): min=3, max=25010, avg=76.06, stdev=579.86
00:13:33.084 clat (usec): min=3049, max=53921, avg=10048.55, stdev=4709.13
00:13:33.084 lat (usec): min=3067, max=53943, avg=10124.61, stdev=4753.44
00:13:33.084 clat percentiles (usec):
00:13:33.084 | 1.00th=[ 3687], 5.00th=[ 5211], 10.00th=[ 6194], 20.00th=[ 7046],
00:13:33.084 | 30.00th=[ 7832], 40.00th=[ 9372], 50.00th=[10290], 60.00th=[10552],
00:13:33.084 | 70.00th=[10683], 80.00th=[11207], 90.00th=[12780], 95.00th=[14222],
00:13:33.084 | 99.00th=[36963], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:13:33.084 | 99.99th=[53740]
00:13:33.084 bw ( KiB/s): min=24560, max=24584, per=39.15%, avg=24572.00, stdev=16.97, samples=2
00:13:33.084 iops : min= 6140, max= 6146, avg=6143.00, stdev= 4.24, samples=2
00:13:33.084 lat (msec) : 4=1.01%, 10=49.23%, 20=48.59%, 50=1.16%, 100=0.01%
00:13:33.084 cpu : usr=6.46%, sys=9.54%, ctx=525, majf=0, minf=1
00:13:33.084 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5%
00:13:33.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:33.084 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:13:33.084 issued rwts: total=5759,6144,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:33.084 latency : target=0, window=0, percentile=100.00%, depth=128
00:13:33.084 job2: (groupid=0, jobs=1): err= 0: pid=1242941: Mon Jul 15 22:37:16 2024
00:13:33.085 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec)
00:13:33.085 slat (usec): min=2, max=20955, avg=147.40, stdev=960.36
00:13:33.085 clat (usec): min=9032, max=75113, avg=18670.21, stdev=8874.46
00:13:33.085 lat (usec): min=9677, max=75159, avg=18817.61, stdev=8950.34
00:13:33.085 clat percentiles (usec):
00:13:33.085 | 1.00th=[10290], 5.00th=[11338], 10.00th=[13042], 20.00th=[14222],
00:13:33.085 | 30.00th=[14615], 40.00th=[15008], 50.00th=[15533], 60.00th=[16319],
00:13:33.085 | 70.00th=[18744], 80.00th=[22152], 90.00th=[25297], 95.00th=[37487],
00:13:33.085 | 99.00th=[64226], 99.50th=[64226], 99.90th=[66323], 99.95th=[66323],
00:13:33.085 | 99.99th=[74974]
00:13:33.085 write: IOPS=3747, BW=14.6MiB/s (15.3MB/s)(14.7MiB/1001msec); 0 zone resets
00:13:33.085 slat (usec): min=3, max=10209, avg=119.14, stdev=740.67
00:13:33.085 clat (usec): min=479, max=32592, avg=16032.20, stdev=4596.20
00:13:33.085 lat (usec): min=1000, max=32597, avg=16151.34, stdev=4629.15
00:13:33.085 clat percentiles
(usec): 00:13:33.085 | 1.00th=[ 3556], 5.00th=[ 9241], 10.00th=[10421], 20.00th=[12256], 00:13:33.085 | 30.00th=[13435], 40.00th=[14353], 50.00th=[15795], 60.00th=[17433], 00:13:33.085 | 70.00th=[19006], 80.00th=[20317], 90.00th=[21627], 95.00th=[23725], 00:13:33.085 | 99.00th=[26608], 99.50th=[27132], 99.90th=[32637], 99.95th=[32637], 00:13:33.085 | 99.99th=[32637] 00:13:33.085 bw ( KiB/s): min=13779, max=15240, per=23.12%, avg=14509.50, stdev=1033.08, samples=2 00:13:33.085 iops : min= 3444, max= 3810, avg=3627.00, stdev=258.80, samples=2 00:13:33.085 lat (usec) : 500=0.01%, 1000=0.03% 00:13:33.085 lat (msec) : 2=0.14%, 4=0.44%, 10=3.57%, 20=72.11%, 50=22.55% 00:13:33.085 lat (msec) : 100=1.16% 00:13:33.085 cpu : usr=2.20%, sys=4.40%, ctx=317, majf=0, minf=1 00:13:33.085 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:13:33.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.085 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:33.085 issued rwts: total=3584,3751,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:33.085 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:33.085 job3: (groupid=0, jobs=1): err= 0: pid=1242942: Mon Jul 15 22:37:16 2024 00:13:33.085 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:13:33.085 slat (usec): min=2, max=15730, avg=178.23, stdev=970.27 00:13:33.085 clat (usec): min=14635, max=49337, avg=22699.63, stdev=5919.82 00:13:33.085 lat (usec): min=14642, max=49353, avg=22877.86, stdev=5994.87 00:13:33.085 clat percentiles (usec): 00:13:33.085 | 1.00th=[15926], 5.00th=[17433], 10.00th=[17957], 20.00th=[18482], 00:13:33.085 | 30.00th=[19006], 40.00th=[19792], 50.00th=[20317], 60.00th=[21627], 00:13:33.085 | 70.00th=[23987], 80.00th=[25560], 90.00th=[29492], 95.00th=[38011], 00:13:33.085 | 99.00th=[46400], 99.50th=[46400], 99.90th=[46924], 99.95th=[49021], 00:13:33.085 | 99.99th=[49546] 00:13:33.085 write: IOPS=3050, BW=11.9MiB/s 
(12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:13:33.085 slat (usec): min=3, max=28488, avg=168.27, stdev=840.54 00:13:33.085 clat (usec): min=4403, max=45277, avg=22454.73, stdev=5863.38 00:13:33.085 lat (usec): min=6859, max=45285, avg=22623.01, stdev=5882.61 00:13:33.085 clat percentiles (usec): 00:13:33.085 | 1.00th=[ 9896], 5.00th=[14091], 10.00th=[14353], 20.00th=[16581], 00:13:33.085 | 30.00th=[20055], 40.00th=[22152], 50.00th=[23200], 60.00th=[23462], 00:13:33.085 | 70.00th=[23725], 80.00th=[25822], 90.00th=[29754], 95.00th=[32900], 00:13:33.085 | 99.00th=[37487], 99.50th=[37487], 99.90th=[45351], 99.95th=[45351], 00:13:33.085 | 99.99th=[45351] 00:13:33.085 bw ( KiB/s): min=11224, max=12288, per=18.73%, avg=11756.00, stdev=752.36, samples=2 00:13:33.085 iops : min= 2806, max= 3072, avg=2939.00, stdev=188.09, samples=2 00:13:33.085 lat (msec) : 10=0.59%, 20=36.15%, 50=63.26% 00:13:33.085 cpu : usr=3.88%, sys=5.38%, ctx=368, majf=0, minf=1 00:13:33.085 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:13:33.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.085 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:33.085 issued rwts: total=2560,3066,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:33.085 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:33.085 00:13:33.085 Run status group 0 (all jobs): 00:13:33.085 READ: bw=56.2MiB/s (58.9MB/s), 9.95MiB/s-22.4MiB/s (10.4MB/s-23.4MB/s), io=56.5MiB (59.2MB), run=1001-1006msec 00:13:33.085 WRITE: bw=61.3MiB/s (64.3MB/s), 11.0MiB/s-23.9MiB/s (11.5MB/s-25.0MB/s), io=61.7MiB (64.7MB), run=1001-1006msec 00:13:33.085 00:13:33.085 Disk stats (read/write): 00:13:33.085 nvme0n1: ios=2071/2142, merge=0/0, ticks=16442/11359, in_queue=27801, util=97.19% 00:13:33.085 nvme0n2: ios=4910/5120, merge=0/0, ticks=51490/45662, in_queue=97152, util=97.87% 00:13:33.085 nvme0n3: ios=2959/3072, merge=0/0, ticks=18855/19163, in_queue=38018, 
util=87.46% 00:13:33.085 nvme0n4: ios=2324/2560, merge=0/0, ticks=17919/18830, in_queue=36749, util=97.79% 00:13:33.085 22:37:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:33.085 [global] 00:13:33.085 thread=1 00:13:33.085 invalidate=1 00:13:33.085 rw=randwrite 00:13:33.085 time_based=1 00:13:33.085 runtime=1 00:13:33.085 ioengine=libaio 00:13:33.085 direct=1 00:13:33.085 bs=4096 00:13:33.085 iodepth=128 00:13:33.085 norandommap=0 00:13:33.085 numjobs=1 00:13:33.085 00:13:33.085 verify_dump=1 00:13:33.085 verify_backlog=512 00:13:33.085 verify_state_save=0 00:13:33.085 do_verify=1 00:13:33.085 verify=crc32c-intel 00:13:33.085 [job0] 00:13:33.085 filename=/dev/nvme0n1 00:13:33.085 [job1] 00:13:33.085 filename=/dev/nvme0n2 00:13:33.085 [job2] 00:13:33.085 filename=/dev/nvme0n3 00:13:33.085 [job3] 00:13:33.085 filename=/dev/nvme0n4 00:13:33.085 Could not set queue depth (nvme0n1) 00:13:33.085 Could not set queue depth (nvme0n2) 00:13:33.085 Could not set queue depth (nvme0n3) 00:13:33.085 Could not set queue depth (nvme0n4) 00:13:33.085 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:33.085 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:33.085 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:33.085 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:33.085 fio-3.35 00:13:33.085 Starting 4 threads 00:13:34.499 00:13:34.500 job0: (groupid=0, jobs=1): err= 0: pid=1243167: Mon Jul 15 22:37:17 2024 00:13:34.500 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:13:34.500 slat (usec): min=2, max=24104, avg=134.86, stdev=843.29 00:13:34.500 clat (usec): min=7066, max=41190, 
avg=17133.10, stdev=6967.65 00:13:34.500 lat (usec): min=7073, max=41243, avg=17267.96, stdev=7015.42 00:13:34.500 clat percentiles (usec): 00:13:34.500 | 1.00th=[ 8356], 5.00th=[ 9372], 10.00th=[11207], 20.00th=[11863], 00:13:34.500 | 30.00th=[12780], 40.00th=[14484], 50.00th=[15139], 60.00th=[16057], 00:13:34.500 | 70.00th=[17433], 80.00th=[21890], 90.00th=[28705], 95.00th=[33162], 00:13:34.500 | 99.00th=[39584], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:34.500 | 99.99th=[41157] 00:13:34.500 write: IOPS=3748, BW=14.6MiB/s (15.4MB/s)(14.7MiB/1003msec); 0 zone resets 00:13:34.500 slat (usec): min=3, max=15949, avg=129.16, stdev=782.18 00:13:34.500 clat (usec): min=2550, max=38805, avg=17474.63, stdev=7146.68 00:13:34.500 lat (usec): min=2568, max=38817, avg=17603.79, stdev=7189.26 00:13:34.500 clat percentiles (usec): 00:13:34.500 | 1.00th=[ 5997], 5.00th=[ 9765], 10.00th=[10683], 20.00th=[11863], 00:13:34.500 | 30.00th=[13435], 40.00th=[14615], 50.00th=[16188], 60.00th=[16712], 00:13:34.500 | 70.00th=[17957], 80.00th=[21627], 90.00th=[29492], 95.00th=[35390], 00:13:34.500 | 99.00th=[37487], 99.50th=[38011], 99.90th=[39060], 99.95th=[39060], 00:13:34.500 | 99.99th=[39060] 00:13:34.500 bw ( KiB/s): min=13824, max=15240, per=22.00%, avg=14532.00, stdev=1001.26, samples=2 00:13:34.500 iops : min= 3456, max= 3810, avg=3633.00, stdev=250.32, samples=2 00:13:34.500 lat (msec) : 4=0.48%, 10=5.51%, 20=70.34%, 50=23.67% 00:13:34.500 cpu : usr=3.69%, sys=5.49%, ctx=324, majf=0, minf=11 00:13:34.500 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:13:34.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.500 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:34.500 issued rwts: total=3584,3760,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.500 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:34.500 job1: (groupid=0, jobs=1): err= 0: pid=1243168: Mon Jul 15 
22:37:17 2024 00:13:34.500 read: IOPS=5152, BW=20.1MiB/s (21.1MB/s)(20.2MiB/1002msec) 00:13:34.500 slat (usec): min=2, max=10393, avg=91.12, stdev=584.33 00:13:34.500 clat (usec): min=1366, max=25343, avg=11910.54, stdev=3087.05 00:13:34.500 lat (usec): min=2759, max=25363, avg=12001.66, stdev=3117.18 00:13:34.500 clat percentiles (usec): 00:13:34.500 | 1.00th=[ 6128], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[10159], 00:13:34.500 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:13:34.500 | 70.00th=[11994], 80.00th=[13435], 90.00th=[16319], 95.00th=[18220], 00:13:34.500 | 99.00th=[23725], 99.50th=[24511], 99.90th=[25035], 99.95th=[25035], 00:13:34.500 | 99.99th=[25297] 00:13:34.500 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:13:34.500 slat (usec): min=3, max=12223, avg=84.61, stdev=509.27 00:13:34.500 clat (usec): min=2928, max=27511, avg=11629.61, stdev=3304.16 00:13:34.500 lat (usec): min=2939, max=27533, avg=11714.22, stdev=3324.91 00:13:34.500 clat percentiles (usec): 00:13:34.500 | 1.00th=[ 4359], 5.00th=[ 6849], 10.00th=[ 8979], 20.00th=[10028], 00:13:34.500 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[11207], 00:13:34.500 | 70.00th=[12125], 80.00th=[13304], 90.00th=[14746], 95.00th=[18744], 00:13:34.500 | 99.00th=[25560], 99.50th=[25560], 99.90th=[25560], 99.95th=[25560], 00:13:34.500 | 99.99th=[27395] 00:13:34.500 bw ( KiB/s): min=20480, max=23904, per=33.60%, avg=22192.00, stdev=2421.13, samples=2 00:13:34.500 iops : min= 5120, max= 5976, avg=5548.00, stdev=605.28, samples=2 00:13:34.500 lat (msec) : 2=0.01%, 4=0.56%, 10=17.97%, 20=78.07%, 50=3.39% 00:13:34.500 cpu : usr=6.59%, sys=8.99%, ctx=464, majf=0, minf=9 00:13:34.500 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:13:34.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.500 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:34.500 
issued rwts: total=5163,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.500 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:34.500 job2: (groupid=0, jobs=1): err= 0: pid=1243169: Mon Jul 15 22:37:17 2024 00:13:34.500 read: IOPS=3576, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:13:34.500 slat (usec): min=3, max=22226, avg=141.30, stdev=928.83 00:13:34.500 clat (usec): min=1769, max=57217, avg=17931.85, stdev=9927.77 00:13:34.500 lat (usec): min=3052, max=57235, avg=18073.15, stdev=10009.80 00:13:34.500 clat percentiles (usec): 00:13:34.500 | 1.00th=[ 9896], 5.00th=[11207], 10.00th=[11994], 20.00th=[12649], 00:13:34.500 | 30.00th=[13173], 40.00th=[13566], 50.00th=[13960], 60.00th=[14353], 00:13:34.500 | 70.00th=[15008], 80.00th=[17695], 90.00th=[37487], 95.00th=[44303], 00:13:34.500 | 99.00th=[48497], 99.50th=[52691], 99.90th=[55313], 99.95th=[56886], 00:13:34.500 | 99.99th=[57410] 00:13:34.500 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:13:34.500 slat (usec): min=4, max=41499, avg=111.27, stdev=804.32 00:13:34.500 clat (usec): min=3095, max=53428, avg=14973.64, stdev=7616.37 00:13:34.500 lat (usec): min=3112, max=54075, avg=15084.92, stdev=7640.11 00:13:34.500 clat percentiles (usec): 00:13:34.500 | 1.00th=[ 6587], 5.00th=[10552], 10.00th=[11338], 20.00th=[12256], 00:13:34.500 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:13:34.500 | 70.00th=[13829], 80.00th=[14615], 90.00th=[17957], 95.00th=[30802], 00:13:34.500 | 99.00th=[52691], 99.50th=[53216], 99.90th=[53216], 99.95th=[53216], 00:13:34.500 | 99.99th=[53216] 00:13:34.500 bw ( KiB/s): min=12288, max=19480, per=24.05%, avg=15884.00, stdev=5085.51, samples=2 00:13:34.500 iops : min= 3072, max= 4870, avg=3971.00, stdev=1271.38, samples=2 00:13:34.500 lat (msec) : 2=0.01%, 4=0.42%, 10=1.68%, 20=85.60%, 50=10.27% 00:13:34.500 lat (msec) : 100=2.02% 00:13:34.500 cpu : usr=5.39%, sys=7.49%, ctx=493, majf=0, minf=15 00:13:34.500 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:34.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.500 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:34.500 issued rwts: total=3587,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.500 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:34.500 job3: (groupid=0, jobs=1): err= 0: pid=1243170: Mon Jul 15 22:37:17 2024 00:13:34.500 read: IOPS=2843, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1003msec) 00:13:34.500 slat (usec): min=3, max=13376, avg=179.15, stdev=967.83 00:13:34.500 clat (usec): min=563, max=61394, avg=23641.56, stdev=13181.40 00:13:34.500 lat (usec): min=3320, max=61409, avg=23820.71, stdev=13253.94 00:13:34.500 clat percentiles (usec): 00:13:34.500 | 1.00th=[ 4146], 5.00th=[10159], 10.00th=[11207], 20.00th=[12780], 00:13:34.500 | 30.00th=[15139], 40.00th=[17433], 50.00th=[19268], 60.00th=[21890], 00:13:34.500 | 70.00th=[26346], 80.00th=[32637], 90.00th=[45351], 95.00th=[51119], 00:13:34.500 | 99.00th=[58459], 99.50th=[61604], 99.90th=[61604], 99.95th=[61604], 00:13:34.500 | 99.99th=[61604] 00:13:34.500 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:13:34.500 slat (usec): min=4, max=21023, avg=150.55, stdev=963.77 00:13:34.500 clat (usec): min=6772, max=56804, avg=19380.81, stdev=8861.01 00:13:34.500 lat (usec): min=6780, max=56813, avg=19531.36, stdev=8926.95 00:13:34.500 clat percentiles (usec): 00:13:34.500 | 1.00th=[ 7635], 5.00th=[10552], 10.00th=[11338], 20.00th=[12125], 00:13:34.500 | 30.00th=[14746], 40.00th=[16188], 50.00th=[16581], 60.00th=[18220], 00:13:34.500 | 70.00th=[19006], 80.00th=[26084], 90.00th=[34866], 95.00th=[37487], 00:13:34.500 | 99.00th=[46400], 99.50th=[52167], 99.90th=[56886], 99.95th=[56886], 00:13:34.500 | 99.99th=[56886] 00:13:34.500 bw ( KiB/s): min= 8192, max=16384, per=18.61%, avg=12288.00, stdev=5792.62, samples=2 00:13:34.500 iops : min= 2048, max= 
4096, avg=3072.00, stdev=1448.15, samples=2 00:13:34.500 lat (usec) : 750=0.02% 00:13:34.500 lat (msec) : 4=0.46%, 10=3.21%, 20=61.73%, 50=30.27%, 100=4.32% 00:13:34.500 cpu : usr=3.79%, sys=5.59%, ctx=250, majf=0, minf=15 00:13:34.500 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:13:34.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.500 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:34.500 issued rwts: total=2852,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.500 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:34.500 00:13:34.500 Run status group 0 (all jobs): 00:13:34.500 READ: bw=59.1MiB/s (62.0MB/s), 11.1MiB/s-20.1MiB/s (11.6MB/s-21.1MB/s), io=59.3MiB (62.2MB), run=1002-1003msec 00:13:34.500 WRITE: bw=64.5MiB/s (67.6MB/s), 12.0MiB/s-22.0MiB/s (12.5MB/s-23.0MB/s), io=64.7MiB (67.8MB), run=1002-1003msec 00:13:34.500 00:13:34.500 Disk stats (read/write): 00:13:34.500 nvme0n1: ios=2845/3072, merge=0/0, ticks=22087/25054, in_queue=47141, util=99.30% 00:13:34.500 nvme0n2: ios=4375/4608, merge=0/0, ticks=34420/34259, in_queue=68679, util=86.15% 00:13:34.500 nvme0n3: ios=3098/3151, merge=0/0, ticks=19148/12038, in_queue=31186, util=97.59% 00:13:34.500 nvme0n4: ios=2496/2560, merge=0/0, ticks=16809/14432, in_queue=31241, util=89.52% 00:13:34.500 22:37:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:34.500 22:37:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1243311 00:13:34.500 22:37:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:34.500 22:37:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:34.500 [global] 00:13:34.500 thread=1 00:13:34.500 invalidate=1 00:13:34.500 rw=read 00:13:34.500 time_based=1 00:13:34.500 runtime=10 00:13:34.500 ioengine=libaio 00:13:34.500 direct=1 00:13:34.500 
bs=4096 00:13:34.500 iodepth=1 00:13:34.500 norandommap=1 00:13:34.500 numjobs=1 00:13:34.500 00:13:34.500 [job0] 00:13:34.500 filename=/dev/nvme0n1 00:13:34.500 [job1] 00:13:34.500 filename=/dev/nvme0n2 00:13:34.500 [job2] 00:13:34.500 filename=/dev/nvme0n3 00:13:34.500 [job3] 00:13:34.500 filename=/dev/nvme0n4 00:13:34.500 Could not set queue depth (nvme0n1) 00:13:34.500 Could not set queue depth (nvme0n2) 00:13:34.500 Could not set queue depth (nvme0n3) 00:13:34.500 Could not set queue depth (nvme0n4) 00:13:34.500 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:34.500 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:34.500 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:34.500 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:34.500 fio-3.35 00:13:34.500 Starting 4 threads 00:13:37.791 22:37:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:37.791 22:37:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:37.791 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=6836224, buflen=4096 00:13:37.791 fio: pid=1243526, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:37.791 22:37:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:37.791 22:37:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:37.791 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=3919872, buflen=4096 00:13:37.791 fio: pid=1243525, 
err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:38.049 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=24559616, buflen=4096 00:13:38.049 fio: pid=1243523, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:38.049 22:37:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:38.049 22:37:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:38.307 22:37:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:38.307 22:37:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:38.307 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=10129408, buflen=4096 00:13:38.307 fio: pid=1243524, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:38.307 00:13:38.307 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1243523: Mon Jul 15 22:37:21 2024 00:13:38.307 read: IOPS=1741, BW=6966KiB/s (7133kB/s)(23.4MiB/3443msec) 00:13:38.307 slat (usec): min=3, max=19427, avg=15.26, stdev=253.59 00:13:38.307 clat (usec): min=283, max=41273, avg=552.05, stdev=2398.05 00:13:38.307 lat (usec): min=291, max=41280, avg=566.82, stdev=2411.01 00:13:38.307 clat percentiles (usec): 00:13:38.307 | 1.00th=[ 297], 5.00th=[ 322], 10.00th=[ 351], 20.00th=[ 388], 00:13:38.307 | 30.00th=[ 396], 40.00th=[ 404], 50.00th=[ 412], 60.00th=[ 420], 00:13:38.307 | 70.00th=[ 429], 80.00th=[ 437], 90.00th=[ 449], 95.00th=[ 474], 00:13:38.307 | 99.00th=[ 553], 99.50th=[ 873], 99.90th=[41157], 99.95th=[41157], 00:13:38.307 | 99.99th=[41157] 00:13:38.307 bw ( KiB/s): min= 3136, max= 9536, per=63.85%, avg=7589.33, stdev=2618.98, 
samples=6 00:13:38.307 iops : min= 784, max= 2384, avg=1897.33, stdev=654.75, samples=6 00:13:38.307 lat (usec) : 500=97.48%, 750=1.97%, 1000=0.08% 00:13:38.307 lat (msec) : 2=0.07%, 4=0.03%, 50=0.35% 00:13:38.307 cpu : usr=1.25%, sys=3.17%, ctx=6000, majf=0, minf=1 00:13:38.307 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:38.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.307 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.307 issued rwts: total=5997,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:38.307 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:38.307 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1243524: Mon Jul 15 22:37:21 2024 00:13:38.307 read: IOPS=662, BW=2649KiB/s (2713kB/s)(9892KiB/3734msec) 00:13:38.307 slat (nsec): min=5156, max=49291, avg=9447.01, stdev=5234.98 00:13:38.307 clat (usec): min=292, max=49475, avg=1489.10, stdev=6719.53 00:13:38.307 lat (usec): min=298, max=49481, avg=1498.55, stdev=6721.46 00:13:38.307 clat percentiles (usec): 00:13:38.307 | 1.00th=[ 302], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 314], 00:13:38.307 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 338], 60.00th=[ 347], 00:13:38.307 | 70.00th=[ 359], 80.00th=[ 379], 90.00th=[ 429], 95.00th=[ 494], 00:13:38.308 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:13:38.308 | 99.99th=[49546] 00:13:38.308 bw ( KiB/s): min= 96, max= 7168, per=22.45%, avg=2668.14, stdev=3254.52, samples=7 00:13:38.308 iops : min= 24, max= 1792, avg=667.00, stdev=813.60, samples=7 00:13:38.308 lat (usec) : 500=95.07%, 750=1.86%, 1000=0.08% 00:13:38.308 lat (msec) : 2=0.12%, 4=0.04%, 50=2.79% 00:13:38.308 cpu : usr=0.38%, sys=1.02%, ctx=2476, majf=0, minf=1 00:13:38.308 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:38.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:13:38.308 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.308 issued rwts: total=2474,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:38.308 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:38.308 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1243525: Mon Jul 15 22:37:21 2024 00:13:38.308 read: IOPS=300, BW=1203KiB/s (1232kB/s)(3828KiB/3183msec) 00:13:38.308 slat (usec): min=4, max=18056, avg=48.79, stdev=766.74 00:13:38.308 clat (usec): min=311, max=44015, avg=3250.16, stdev=10296.45 00:13:38.308 lat (usec): min=317, max=44037, avg=3298.96, stdev=10318.31 00:13:38.308 clat percentiles (usec): 00:13:38.308 | 1.00th=[ 318], 5.00th=[ 326], 10.00th=[ 338], 20.00th=[ 363], 00:13:38.308 | 30.00th=[ 388], 40.00th=[ 396], 50.00th=[ 424], 60.00th=[ 469], 00:13:38.308 | 70.00th=[ 510], 80.00th=[ 545], 90.00th=[ 783], 95.00th=[41157], 00:13:38.308 | 99.00th=[41157], 99.50th=[41157], 99.90th=[43779], 99.95th=[43779], 00:13:38.308 | 99.99th=[43779] 00:13:38.308 bw ( KiB/s): min= 96, max= 4656, per=7.23%, avg=860.00, stdev=1859.66, samples=6 00:13:38.308 iops : min= 24, max= 1164, avg=215.00, stdev=464.92, samples=6 00:13:38.308 lat (usec) : 500=66.81%, 750=21.82%, 1000=4.18% 00:13:38.308 lat (msec) : 2=0.21%, 50=6.89% 00:13:38.308 cpu : usr=0.19%, sys=0.47%, ctx=961, majf=0, minf=1 00:13:38.308 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:38.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.308 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.308 issued rwts: total=958,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:38.308 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:38.308 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1243526: Mon Jul 15 22:37:21 2024 00:13:38.308 read: IOPS=572, 
BW=2289KiB/s (2344kB/s)(6676KiB/2916msec) 00:13:38.308 slat (nsec): min=4364, max=68692, avg=13502.69, stdev=9005.80 00:13:38.308 clat (usec): min=318, max=42608, avg=1716.50, stdev=7280.64 00:13:38.308 lat (usec): min=323, max=42635, avg=1729.99, stdev=7282.30 00:13:38.308 clat percentiles (usec): 00:13:38.308 | 1.00th=[ 326], 5.00th=[ 330], 10.00th=[ 334], 20.00th=[ 343], 00:13:38.308 | 30.00th=[ 347], 40.00th=[ 359], 50.00th=[ 375], 60.00th=[ 379], 00:13:38.308 | 70.00th=[ 383], 80.00th=[ 400], 90.00th=[ 424], 95.00th=[ 449], 00:13:38.308 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42730], 00:13:38.308 | 99.99th=[42730] 00:13:38.308 bw ( KiB/s): min= 96, max=10352, per=22.33%, avg=2654.40, stdev=4357.43, samples=5 00:13:38.308 iops : min= 24, max= 2588, avg=663.60, stdev=1089.36, samples=5 00:13:38.308 lat (usec) : 500=96.23%, 750=0.18%, 1000=0.18% 00:13:38.308 lat (msec) : 2=0.06%, 50=3.29% 00:13:38.308 cpu : usr=0.34%, sys=0.89%, ctx=1670, majf=0, minf=1 00:13:38.308 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:38.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.308 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.308 issued rwts: total=1670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:38.308 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:38.308 00:13:38.308 Run status group 0 (all jobs): 00:13:38.308 READ: bw=11.6MiB/s (12.2MB/s), 1203KiB/s-6966KiB/s (1232kB/s-7133kB/s), io=43.3MiB (45.4MB), run=2916-3734msec 00:13:38.308 00:13:38.308 Disk stats (read/write): 00:13:38.308 nvme0n1: ios=5994/0, merge=0/0, ticks=3177/0, in_queue=3177, util=95.42% 00:13:38.308 nvme0n2: ios=2470/0, merge=0/0, ticks=3534/0, in_queue=3534, util=96.57% 00:13:38.308 nvme0n3: ios=845/0, merge=0/0, ticks=3044/0, in_queue=3044, util=95.79% 00:13:38.308 nvme0n4: ios=1668/0, merge=0/0, ticks=2817/0, in_queue=2817, util=96.71% 00:13:38.566 22:37:21 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:38.566 22:37:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:38.824 22:37:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:38.824 22:37:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:39.082 22:37:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:39.082 22:37:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:39.340 22:37:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:39.340 22:37:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:39.598 22:37:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:39.598 22:37:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1243311 00:13:39.598 22:37:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:39.598 22:37:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:39.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.856 22:37:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:39.856 22:37:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1213 -- # local i=0 00:13:39.856 22:37:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:13:39.856 
22:37:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:39.856 22:37:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:13:39.856 22:37:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:39.856 22:37:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1225 -- # return 0 00:13:39.856 22:37:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:39.856 22:37:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:39.856 nvmf hotplug test: fio failed as expected 00:13:39.856 22:37:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:40.115 22:37:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:40.115 22:37:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:40.115 22:37:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:40.115 22:37:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:40.115 22:37:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:40.115 22:37:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:40.115 22:37:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:13:40.115 22:37:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:40.115 22:37:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:13:40.115 22:37:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:40.115 22:37:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:40.115 rmmod nvme_tcp 00:13:40.115 rmmod nvme_fabrics 00:13:40.115 rmmod nvme_keyring 
00:13:40.115 22:37:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:40.115 22:37:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:13:40.115 22:37:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:13:40.115 22:37:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1241398 ']' 00:13:40.115 22:37:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1241398 00:13:40.115 22:37:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@942 -- # '[' -z 1241398 ']' 00:13:40.115 22:37:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # kill -0 1241398 00:13:40.115 22:37:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@947 -- # uname 00:13:40.115 22:37:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:13:40.115 22:37:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1241398 00:13:40.115 22:37:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:13:40.115 22:37:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:13:40.115 22:37:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1241398' 00:13:40.115 killing process with pid 1241398 00:13:40.115 22:37:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@961 -- # kill 1241398 00:13:40.115 22:37:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # wait 1241398 00:13:40.373 22:37:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:40.373 22:37:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:40.373 22:37:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:40.373 22:37:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:40.373 22:37:23 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:40.373 22:37:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.373 22:37:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:40.373 22:37:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.908 22:37:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:42.908 00:13:42.908 real 0m23.374s 00:13:42.908 user 1m21.406s 00:13:42.908 sys 0m6.466s 00:13:42.908 22:37:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1118 -- # xtrace_disable 00:13:42.908 22:37:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.908 ************************************ 00:13:42.908 END TEST nvmf_fio_target 00:13:42.908 ************************************ 00:13:42.908 22:37:25 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:13:42.908 22:37:25 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:42.908 22:37:25 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:13:42.908 22:37:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:13:42.908 22:37:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:42.908 ************************************ 00:13:42.908 START TEST nvmf_bdevio 00:13:42.908 ************************************ 00:13:42.908 22:37:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:42.908 * Looking for test storage... 
00:13:42.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:42.908 22:37:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:42.908 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:42.908 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:42.908 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:42.908 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:42.908 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:42.908 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:42.908 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:42.908 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:42.908 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:42.908 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:42.908 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:42.908 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:42.908 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:42.908 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:42.908 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:42.908 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:42.908 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:42.908 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:42.908 22:37:25 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:42.908 22:37:25 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.908 22:37:25 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:42.908 22:37:25 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.908 22:37:25 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.908 22:37:25 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.908 22:37:25 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:42.908 22:37:25 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.908 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:13:42.908 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:42.908 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:42.908 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:42.908 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:42.908 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:42.909 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:42.909 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:42.909 22:37:25 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:13:42.909 22:37:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:42.909 22:37:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:42.909 22:37:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:13:42.909 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:42.909 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:42.909 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:42.909 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:42.909 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:42.909 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.909 22:37:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:42.909 22:37:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.909 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:42.909 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:42.909 22:37:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:13:42.909 22:37:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:44.816 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:44.816 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:13:44.816 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:44.816 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:44.816 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:44.816 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 
00:13:44.816 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:44.816 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:13:44.816 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:44.816 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:13:44.816 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:13:44.816 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:13:44.816 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:13:44.816 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:13:44.816 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:13:44.816 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:44.816 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:44.816 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:44.816 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:44.816 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:44.816 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:44.816 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:44.816 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:44.816 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:44.817 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:44.817 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:44.817 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:44.817 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:44.817 22:37:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:44.817 22:37:28 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:44.817 22:37:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:44.817 22:37:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:44.817 22:37:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:44.817 22:37:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:44.817 22:37:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:44.817 22:37:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:44.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:44.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:13:44.817 00:13:44.817 --- 10.0.0.2 ping statistics --- 00:13:44.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.817 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:13:44.817 22:37:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:44.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:44.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:13:44.817 00:13:44.817 --- 10.0.0.1 ping statistics --- 00:13:44.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.817 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:13:44.817 22:37:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:44.817 22:37:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:13:44.817 22:37:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:44.817 22:37:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:44.817 22:37:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:44.817 22:37:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:44.817 22:37:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:44.817 22:37:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:44.817 22:37:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:44.817 22:37:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:44.817 22:37:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:44.817 22:37:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:44.817 22:37:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:44.817 22:37:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:44.817 22:37:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1246147 00:13:44.817 22:37:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1246147 00:13:44.817 22:37:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@823 -- # '[' -z 1246147 ']' 00:13:44.817 22:37:28 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.817 22:37:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@828 -- # local max_retries=100 00:13:44.817 22:37:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.817 22:37:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # xtrace_disable 00:13:44.817 22:37:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:44.817 [2024-07-15 22:37:28.172510] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:13:44.817 [2024-07-15 22:37:28.172604] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.817 [2024-07-15 22:37:28.247347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:45.077 [2024-07-15 22:37:28.375068] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.078 [2024-07-15 22:37:28.375125] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:45.078 [2024-07-15 22:37:28.375141] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:45.078 [2024-07-15 22:37:28.375155] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:45.078 [2024-07-15 22:37:28.375166] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:45.078 [2024-07-15 22:37:28.375252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:45.078 [2024-07-15 22:37:28.375310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:45.078 [2024-07-15 22:37:28.375363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:45.078 [2024-07-15 22:37:28.375366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:45.078 22:37:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:13:45.078 22:37:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # return 0 00:13:45.078 22:37:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:45.078 22:37:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:45.078 22:37:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:45.078 22:37:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.078 22:37:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:45.078 22:37:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:45.078 22:37:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:45.078 [2024-07-15 22:37:28.538800] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:45.078 22:37:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:45.078 22:37:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:45.078 22:37:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:45.078 22:37:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:45.078 Malloc0 00:13:45.078 22:37:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:45.078 22:37:28 nvmf_tcp.nvmf_bdevio 
-- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:45.078 22:37:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:45.078 22:37:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:45.338 22:37:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:45.338 22:37:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:45.338 22:37:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:45.338 22:37:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:45.338 22:37:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:45.338 22:37:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:45.338 22:37:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:45.338 22:37:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:45.338 [2024-07-15 22:37:28.592748] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.338 22:37:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:45.338 22:37:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:45.338 22:37:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:45.338 22:37:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:13:45.338 22:37:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:13:45.338 22:37:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:45.338 22:37:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:13:45.338 { 00:13:45.338 "params": { 00:13:45.338 "name": "Nvme$subsystem", 00:13:45.338 "trtype": "$TEST_TRANSPORT", 00:13:45.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:45.338 "adrfam": "ipv4", 00:13:45.338 "trsvcid": "$NVMF_PORT", 00:13:45.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:45.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:45.338 "hdgst": ${hdgst:-false}, 00:13:45.338 "ddgst": ${ddgst:-false} 00:13:45.338 }, 00:13:45.338 "method": "bdev_nvme_attach_controller" 00:13:45.338 } 00:13:45.338 EOF 00:13:45.338 )") 00:13:45.338 22:37:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:13:45.338 22:37:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:13:45.338 22:37:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:13:45.338 22:37:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:45.338 "params": { 00:13:45.338 "name": "Nvme1", 00:13:45.338 "trtype": "tcp", 00:13:45.338 "traddr": "10.0.0.2", 00:13:45.338 "adrfam": "ipv4", 00:13:45.338 "trsvcid": "4420", 00:13:45.338 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:45.338 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:45.338 "hdgst": false, 00:13:45.338 "ddgst": false 00:13:45.338 }, 00:13:45.338 "method": "bdev_nvme_attach_controller" 00:13:45.338 }' 00:13:45.338 [2024-07-15 22:37:28.640127] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:13:45.338 [2024-07-15 22:37:28.640214] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1246171 ] 00:13:45.338 [2024-07-15 22:37:28.700748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:45.338 [2024-07-15 22:37:28.813344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.338 [2024-07-15 22:37:28.813371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:45.338 [2024-07-15 22:37:28.813375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.596 I/O targets: 00:13:45.596 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:45.596 00:13:45.596 00:13:45.596 CUnit - A unit testing framework for C - Version 2.1-3 00:13:45.596 http://cunit.sourceforge.net/ 00:13:45.596 00:13:45.596 00:13:45.596 Suite: bdevio tests on: Nvme1n1 00:13:45.596 Test: blockdev write read block ...passed 00:13:45.596 Test: blockdev write zeroes read block ...passed 00:13:45.596 Test: blockdev write zeroes read no split ...passed 00:13:45.854 Test: blockdev write zeroes read split ...passed 00:13:45.854 Test: blockdev write zeroes read split partial ...passed 00:13:45.854 Test: blockdev reset ...[2024-07-15 22:37:29.203796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:45.854 [2024-07-15 22:37:29.203905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc9e580 (9): Bad file descriptor 00:13:45.854 [2024-07-15 22:37:29.214758] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:45.854 passed 00:13:45.854 Test: blockdev write read 8 blocks ...passed 00:13:45.854 Test: blockdev write read size > 128k ...passed 00:13:45.854 Test: blockdev write read invalid size ...passed 00:13:45.854 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:45.854 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:45.854 Test: blockdev write read max offset ...passed 00:13:45.854 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:46.113 Test: blockdev writev readv 8 blocks ...passed 00:13:46.113 Test: blockdev writev readv 30 x 1block ...passed 00:13:46.113 Test: blockdev writev readv block ...passed 00:13:46.113 Test: blockdev writev readv size > 128k ...passed 00:13:46.113 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:46.113 Test: blockdev comparev and writev ...[2024-07-15 22:37:29.475942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.113 [2024-07-15 22:37:29.475977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:46.113 [2024-07-15 22:37:29.476001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.113 [2024-07-15 22:37:29.476018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:46.113 [2024-07-15 22:37:29.476402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.113 [2024-07-15 22:37:29.476428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:46.113 [2024-07-15 22:37:29.476450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.113 [2024-07-15 22:37:29.476466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:46.113 [2024-07-15 22:37:29.476857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.113 [2024-07-15 22:37:29.476890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:46.113 [2024-07-15 22:37:29.476914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.113 [2024-07-15 22:37:29.476929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:46.113 [2024-07-15 22:37:29.477314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.113 [2024-07-15 22:37:29.477340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:46.114 [2024-07-15 22:37:29.477361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.114 [2024-07-15 22:37:29.477377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:46.114 passed 00:13:46.114 Test: blockdev nvme passthru rw ...passed 00:13:46.114 Test: blockdev nvme passthru vendor specific ...[2024-07-15 22:37:29.561270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:46.114 [2024-07-15 22:37:29.561299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:46.114 [2024-07-15 22:37:29.561523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:46.114 [2024-07-15 22:37:29.561547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:46.114 [2024-07-15 22:37:29.561770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:46.114 [2024-07-15 22:37:29.561795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:46.114 [2024-07-15 22:37:29.562020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:46.114 [2024-07-15 22:37:29.562046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:46.114 passed 00:13:46.114 Test: blockdev nvme admin passthru ...passed 00:13:46.372 Test: blockdev copy ...passed 00:13:46.372 00:13:46.372 Run Summary: Type Total Ran Passed Failed Inactive 00:13:46.372 suites 1 1 n/a 0 0 00:13:46.372 tests 23 23 23 0 0 00:13:46.372 asserts 152 152 152 0 n/a 00:13:46.372 00:13:46.372 Elapsed time = 1.241 seconds 00:13:46.372 22:37:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:46.372 22:37:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:46.372 22:37:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:46.372 22:37:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:46.372 22:37:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:46.372 22:37:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 
00:13:46.372 22:37:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:46.372 22:37:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:13:46.372 22:37:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:46.372 22:37:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:13:46.372 22:37:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:46.372 22:37:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:46.631 rmmod nvme_tcp 00:13:46.631 rmmod nvme_fabrics 00:13:46.631 rmmod nvme_keyring 00:13:46.631 22:37:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:46.631 22:37:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:13:46.631 22:37:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:13:46.631 22:37:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1246147 ']' 00:13:46.631 22:37:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1246147 00:13:46.631 22:37:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@942 -- # '[' -z 1246147 ']' 00:13:46.631 22:37:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # kill -0 1246147 00:13:46.631 22:37:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@947 -- # uname 00:13:46.631 22:37:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:13:46.631 22:37:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1246147 00:13:46.631 22:37:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # process_name=reactor_3 00:13:46.631 22:37:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' reactor_3 = sudo ']' 00:13:46.631 22:37:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1246147' 00:13:46.631 killing process with pid 1246147 00:13:46.631 22:37:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@961 -- # kill 
1246147 00:13:46.631 22:37:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # wait 1246147 00:13:46.890 22:37:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:46.890 22:37:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:46.890 22:37:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:46.890 22:37:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:46.890 22:37:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:46.890 22:37:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.890 22:37:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:46.890 22:37:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.787 22:37:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:48.787 00:13:48.787 real 0m6.410s 00:13:48.787 user 0m10.310s 00:13:48.787 sys 0m2.074s 00:13:48.787 22:37:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1118 -- # xtrace_disable 00:13:48.787 22:37:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:48.787 ************************************ 00:13:48.787 END TEST nvmf_bdevio 00:13:48.787 ************************************ 00:13:49.047 22:37:32 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:13:49.047 22:37:32 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:49.047 22:37:32 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:13:49.047 22:37:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:13:49.047 22:37:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:49.047 ************************************ 00:13:49.047 START TEST nvmf_auth_target 00:13:49.047 
************************************ 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:49.047 * Looking for test storage... 00:13:49.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.047 22:37:32 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.047 22:37:32 
nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:49.047 22:37:32 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:49.047 22:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:51.013 22:37:34 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:51.013 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:51.013 22:37:34 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:51.013 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:51.013 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- 
# pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:51.014 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:51.014 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:51.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:51.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:13:51.014 00:13:51.014 --- 10.0.0.2 ping statistics --- 00:13:51.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.014 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:51.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:51.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:13:51.014 00:13:51.014 --- 10.0.0.1 ping statistics --- 00:13:51.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.014 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@716 -- 
# xtrace_disable 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1248240 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1248240 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@823 -- # '[' -z 1248240 ']' 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # local max_retries=100 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # xtrace_disable 00:13:51.014 22:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.272 22:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:13:51.272 22:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # return 0 00:13:51.272 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:51.272 22:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:51.272 22:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.272 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:51.272 22:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1248332 00:13:51.272 22:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:51.272 22:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:51.272 22:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:13:51.272 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:51.272 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:51.272 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:51.272 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:13:51.272 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:51.272 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:51.273 22:37:34 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@727 -- # key=b9c7056d10ff78ed7f4bdb80ad783ba9ff10d0e850f9fe5a 00:13:51.273 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:13:51.273 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.QZe 00:13:51.273 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b9c7056d10ff78ed7f4bdb80ad783ba9ff10d0e850f9fe5a 0 00:13:51.273 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b9c7056d10ff78ed7f4bdb80ad783ba9ff10d0e850f9fe5a 0 00:13:51.273 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:51.273 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:51.273 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b9c7056d10ff78ed7f4bdb80ad783ba9ff10d0e850f9fe5a 00:13:51.273 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:13:51.273 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:51.273 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.QZe 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.QZe 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.QZe 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:51.531 22:37:34 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f2c69a8675e67986527ed8569e688841a3a86f1cff43bf15138b4c968c7ddc05 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.f0C 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f2c69a8675e67986527ed8569e688841a3a86f1cff43bf15138b4c968c7ddc05 3 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f2c69a8675e67986527ed8569e688841a3a86f1cff43bf15138b4c968c7ddc05 3 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f2c69a8675e67986527ed8569e688841a3a86f1cff43bf15138b4c968c7ddc05 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.f0C 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.f0C 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.f0C 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # 
local -A digests 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=be503facf2cd70350cdb8ef588de3126 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.MXe 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key be503facf2cd70350cdb8ef588de3126 1 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 be503facf2cd70350cdb8ef588de3126 1 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=be503facf2cd70350cdb8ef588de3126 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.MXe 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.MXe 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.MXe 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=efc169e0328a185ed00ef6f9ad1ee069e4b2fa0d9bc496f3 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.GlI 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key efc169e0328a185ed00ef6f9ad1ee069e4b2fa0d9bc496f3 2 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 efc169e0328a185ed00ef6f9ad1ee069e4b2fa0d9bc496f3 2 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=efc169e0328a185ed00ef6f9ad1ee069e4b2fa0d9bc496f3 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.GlI 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.GlI 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.GlI 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=898c2fd3124519c700ec96750a2f8db39b2a96b45deb89fd 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.vRW 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 898c2fd3124519c700ec96750a2f8db39b2a96b45deb89fd 2 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 898c2fd3124519c700ec96750a2f8db39b2a96b45deb89fd 2 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=898c2fd3124519c700ec96750a2f8db39b2a96b45deb89fd 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:51.531 22:37:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.vRW 00:13:51.531 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.vRW 00:13:51.531 22:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.vRW 00:13:51.531 22:37:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:13:51.531 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:51.531 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:51.531 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:51.531 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:51.531 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:51.531 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:51.531 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=32dc1c4c094a64cede91bbe0167e2399 00:13:51.531 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:51.531 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.WgZ 00:13:51.531 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 32dc1c4c094a64cede91bbe0167e2399 1 00:13:51.531 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 32dc1c4c094a64cede91bbe0167e2399 1 00:13:51.531 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:51.531 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:51.531 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=32dc1c4c094a64cede91bbe0167e2399 00:13:51.531 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:51.531 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:51.789 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.WgZ 00:13:51.789 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.WgZ 00:13:51.789 22:37:35 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.WgZ 00:13:51.789 22:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:13:51.789 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:51.789 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:51.789 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:51.789 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:51.789 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:51.789 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:51.789 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=80c5f09044d63be7489f24a757f31c6ba63bd8b3eab07f0cc1f72a5e2affc161 00:13:51.789 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:51.789 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.P3d 00:13:51.789 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 80c5f09044d63be7489f24a757f31c6ba63bd8b3eab07f0cc1f72a5e2affc161 3 00:13:51.789 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 80c5f09044d63be7489f24a757f31c6ba63bd8b3eab07f0cc1f72a5e2affc161 3 00:13:51.789 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:51.789 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:51.789 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=80c5f09044d63be7489f24a757f31c6ba63bd8b3eab07f0cc1f72a5e2affc161 00:13:51.789 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:51.789 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:51.789 22:37:35 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.P3d 00:13:51.789 22:37:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.P3d 00:13:51.789 22:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.P3d 00:13:51.789 22:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:13:51.789 22:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1248240 00:13:51.789 22:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@823 -- # '[' -z 1248240 ']' 00:13:51.789 22:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.789 22:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # local max_retries=100 00:13:51.790 22:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
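For reference, the gen_dhchap_key/format_dhchap_key steps traced above boil down to: read `len/2` random bytes as a hex string (`xxd -p -c0 -l $((len / 2)) /dev/urandom`), then wrap it as `DHHC-1:<digest id>:<base64 payload>:` via an inline `python -` snippet. A minimal standalone sketch; the payload layout (the ASCII hex key followed by its CRC-32, assumed little-endian) is inferred from the secrets visible later in this log, not quoted from nvmf/common.sh:

```python
import base64
import secrets
import struct
import zlib

# Digest-name -> id mapping, as in the `digests` associative array above.
DIGESTS = {"null": 0, "sha256": 1, "sha384": 2, "sha512": 3}

def gen_dhchap_key(length: int) -> str:
    # Equivalent of `xxd -p -c0 -l $((length / 2)) /dev/urandom`:
    # `length` lowercase hex characters.
    return secrets.token_hex(length // 2)

def format_dhchap_key(key: str, digest: str) -> str:
    # Payload = ASCII hex key + CRC-32 of those bytes (little-endian assumed),
    # base64-encoded and wrapped as DHHC-1:<2-digit digest id>:<b64>:
    data = key.encode()
    payload = data + struct.pack("<I", zlib.crc32(data))
    return "DHHC-1:{:02}:{}:".format(
        DIGESTS[digest], base64.b64encode(payload).decode())
```

Formatting the first generated key (b9c7…fe5a, digest null) reproduces the DHHC-1:00:YjljNzA1… secret that the host later passes to `nvme connect` in this log.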
00:13:51.790 22:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # xtrace_disable 00:13:51.790 22:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.047 22:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:13:52.047 22:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # return 0 00:13:52.047 22:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1248332 /var/tmp/host.sock 00:13:52.047 22:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@823 -- # '[' -z 1248332 ']' 00:13:52.047 22:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/host.sock 00:13:52.047 22:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # local max_retries=100 00:13:52.047 22:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:52.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:13:52.047 22:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # xtrace_disable 00:13:52.047 22:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.305 22:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:13:52.305 22:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # return 0 00:13:52.305 22:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:13:52.305 22:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:52.305 22:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.305 22:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:52.305 22:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:52.305 22:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.QZe 00:13:52.305 22:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:52.305 22:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.305 22:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:52.305 22:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.QZe 00:13:52.305 22:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.QZe 00:13:52.562 22:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.f0C ]] 00:13:52.562 22:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.f0C 00:13:52.562 22:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:52.562 22:37:35 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.562 22:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:52.562 22:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.f0C 00:13:52.562 22:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.f0C 00:13:52.821 22:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:52.821 22:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.MXe 00:13:52.821 22:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:52.821 22:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.821 22:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:52.821 22:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.MXe 00:13:52.821 22:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.MXe 00:13:53.078 22:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.GlI ]] 00:13:53.078 22:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.GlI 00:13:53.078 22:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:53.078 22:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.078 22:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:53.078 22:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc 
keyring_file_add_key ckey1 /tmp/spdk.key-sha384.GlI 00:13:53.078 22:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.GlI 00:13:53.336 22:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:53.336 22:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.vRW 00:13:53.336 22:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:53.336 22:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.336 22:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:53.336 22:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.vRW 00:13:53.336 22:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.vRW 00:13:53.594 22:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.WgZ ]] 00:13:53.594 22:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.WgZ 00:13:53.594 22:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:53.594 22:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.851 22:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:53.851 22:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.WgZ 00:13:53.851 22:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.WgZ 00:13:53.851 22:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:53.851 22:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.P3d 00:13:53.851 22:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:53.851 22:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.851 22:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:53.851 22:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.P3d 00:13:53.851 22:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.P3d 00:13:54.108 22:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:13:54.108 22:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:54.108 22:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:54.108 22:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:54.108 22:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:54.108 22:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:54.366 22:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:13:54.366 22:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:54.366 22:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:54.366 22:37:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:54.366 22:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:54.366 22:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.366 22:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.366 22:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:54.366 22:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.366 22:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:54.366 22:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.366 22:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.930 00:13:54.930 22:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:54.930 22:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.930 22:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:54.930 22:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.930 
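Going the other way, the `--dhchap-secret`/`--dhchap-ctrl-secret` strings that `nvme connect` receives in this log can be unpacked back into the generated hex keys. A hypothetical helper (`parse_dhchap_secret` is not part of the test scripts), assuming the payload is the ASCII key followed by a trailing 4-byte CRC-32:

```python
import base64

def parse_dhchap_secret(secret: str) -> tuple[int, str]:
    # Layout: DHHC-1:<2-digit digest id>:<base64(key || crc32)>:
    prefix, digest_id, b64, _empty = secret.split(":")
    if prefix != "DHHC-1":
        raise ValueError("not a DHHC-1 secret")
    payload = base64.b64decode(b64)
    return int(digest_id), payload[:-4].decode()  # drop trailing CRC-32 bytes
```

Applied to the sha256 secret appearing in this trace, it recovers the be503f… key generated at the top of this section.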
22:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:54.930 22:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:54.930 22:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.930 22:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:54.930 22:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:54.930 { 00:13:54.930 "cntlid": 1, 00:13:54.930 "qid": 0, 00:13:54.930 "state": "enabled", 00:13:54.930 "thread": "nvmf_tgt_poll_group_000", 00:13:54.930 "listen_address": { 00:13:54.930 "trtype": "TCP", 00:13:54.930 "adrfam": "IPv4", 00:13:54.930 "traddr": "10.0.0.2", 00:13:54.930 "trsvcid": "4420" 00:13:54.930 }, 00:13:54.930 "peer_address": { 00:13:54.930 "trtype": "TCP", 00:13:54.930 "adrfam": "IPv4", 00:13:54.930 "traddr": "10.0.0.1", 00:13:54.930 "trsvcid": "43844" 00:13:54.930 }, 00:13:54.930 "auth": { 00:13:54.930 "state": "completed", 00:13:54.930 "digest": "sha256", 00:13:54.930 "dhgroup": "null" 00:13:54.930 } 00:13:54.930 } 00:13:54.930 ]' 00:13:54.930 22:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:55.187 22:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:55.187 22:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:55.187 22:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:55.187 22:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:55.187 22:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.187 22:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.187 22:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.443 22:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjljNzA1NmQxMGZmNzhlZDdmNGJkYjgwYWQ3ODNiYTlmZjEwZDBlODUwZjlmZTVhup/mQw==: --dhchap-ctrl-secret DHHC-1:03:ZjJjNjlhODY3NWU2Nzk4NjUyN2VkODU2OWU2ODg4NDFhM2E4NmYxY2ZmNDNiZjE1MTM4YjRjOTY4YzdkZGMwNTOyvQM=: 00:13:56.374 22:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.374 22:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:56.374 22:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:56.374 22:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.374 22:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:56.374 22:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:56.374 22:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:56.374 22:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:56.632 22:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:13:56.632 22:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
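The jq probes on the qpair listing above (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state`) translate directly into a small JSON check. A sketch using an abbreviated copy of the qpair record printed in this trace (`check_qpair_auth` is an illustrative name, not from the scripts):

```python
import json

# Abbreviated first qpair record from `nvmf_subsystem_get_qpairs` above.
QPAIRS = """
[
  {
    "cntlid": 1,
    "qid": 0,
    "state": "enabled",
    "auth": {
      "state": "completed",
      "digest": "sha256",
      "dhgroup": "null"
    }
  }
]
"""

def check_qpair_auth(raw: str, digest: str, dhgroup: str) -> bool:
    # Mirrors the jq assertions: digest and dhgroup match the negotiated
    # values, and the auth state reached "completed".
    auth = json.loads(raw)[0]["auth"]
    return (auth["digest"] == digest
            and auth["dhgroup"] == dhgroup
            and auth["state"] == "completed")
```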
00:13:56.632 22:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:56.632 22:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:56.632 22:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:56.632 22:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.632 22:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.632 22:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:56.632 22:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.632 22:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:56.632 22:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.632 22:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.889 00:13:56.889 22:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:56.889 22:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:56.889 22:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:13:57.146 22:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.147 22:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.147 22:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:57.147 22:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.147 22:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:57.147 22:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:57.147 { 00:13:57.147 "cntlid": 3, 00:13:57.147 "qid": 0, 00:13:57.147 "state": "enabled", 00:13:57.147 "thread": "nvmf_tgt_poll_group_000", 00:13:57.147 "listen_address": { 00:13:57.147 "trtype": "TCP", 00:13:57.147 "adrfam": "IPv4", 00:13:57.147 "traddr": "10.0.0.2", 00:13:57.147 "trsvcid": "4420" 00:13:57.147 }, 00:13:57.147 "peer_address": { 00:13:57.147 "trtype": "TCP", 00:13:57.147 "adrfam": "IPv4", 00:13:57.147 "traddr": "10.0.0.1", 00:13:57.147 "trsvcid": "43866" 00:13:57.147 }, 00:13:57.147 "auth": { 00:13:57.147 "state": "completed", 00:13:57.147 "digest": "sha256", 00:13:57.147 "dhgroup": "null" 00:13:57.147 } 00:13:57.147 } 00:13:57.147 ]' 00:13:57.147 22:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:57.147 22:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:57.147 22:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:57.147 22:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:57.147 22:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:57.404 22:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.404 22:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:13:57.404 22:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.662 22:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmU1MDNmYWNmMmNkNzAzNTBjZGI4ZWY1ODhkZTMxMjaqvICO: --dhchap-ctrl-secret DHHC-1:02:ZWZjMTY5ZTAzMjhhMTg1ZWQwMGVmNmY5YWQxZWUwNjllNGIyZmEwZDliYzQ5NmYz8d+Jsw==: 00:13:58.594 22:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.594 22:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:58.594 22:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:58.594 22:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.594 22:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:58.594 22:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:58.594 22:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:58.594 22:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:58.851 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:13:58.851 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # 
local digest dhgroup key ckey qpairs 00:13:58.851 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:58.851 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:58.851 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:58.851 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.851 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:58.851 22:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:58.851 22:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.851 22:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:58.851 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:58.851 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.109 00:13:59.109 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:59.109 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:59.109 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.367 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.367 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.367 22:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:59.367 22:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.367 22:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:59.367 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:59.367 { 00:13:59.367 "cntlid": 5, 00:13:59.367 "qid": 0, 00:13:59.367 "state": "enabled", 00:13:59.367 "thread": "nvmf_tgt_poll_group_000", 00:13:59.367 "listen_address": { 00:13:59.367 "trtype": "TCP", 00:13:59.367 "adrfam": "IPv4", 00:13:59.367 "traddr": "10.0.0.2", 00:13:59.367 "trsvcid": "4420" 00:13:59.367 }, 00:13:59.367 "peer_address": { 00:13:59.367 "trtype": "TCP", 00:13:59.367 "adrfam": "IPv4", 00:13:59.367 "traddr": "10.0.0.1", 00:13:59.367 "trsvcid": "55294" 00:13:59.367 }, 00:13:59.367 "auth": { 00:13:59.367 "state": "completed", 00:13:59.367 "digest": "sha256", 00:13:59.367 "dhgroup": "null" 00:13:59.367 } 00:13:59.367 } 00:13:59.367 ]' 00:13:59.367 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:59.624 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:59.624 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:59.624 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:59.624 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:59.624 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.624 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 
-- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.624 22:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.881 22:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODk4YzJmZDMxMjQ1MTljNzAwZWM5Njc1MGEyZjhkYjM5YjJhOTZiNDVkZWI4OWZkZBtlLA==: --dhchap-ctrl-secret DHHC-1:01:MzJkYzFjNGMwOTRhNjRjZWRlOTFiYmUwMTY3ZTIzOTm4K2J/: 00:14:00.812 22:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.812 22:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:00.812 22:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:00.812 22:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.812 22:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:00.812 22:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:00.812 22:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:00.812 22:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:01.070 22:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:14:01.070 22:37:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:01.070 22:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:01.070 22:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:01.070 22:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:01.070 22:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.070 22:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:01.070 22:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:01.070 22:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.070 22:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:01.070 22:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:01.070 22:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:01.328 00:14:01.328 22:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:01.328 22:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.328 22:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r 
'.[].name' 00:14:01.586 22:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.586 22:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.586 22:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:01.586 22:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.586 22:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:01.586 22:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:01.586 { 00:14:01.586 "cntlid": 7, 00:14:01.586 "qid": 0, 00:14:01.586 "state": "enabled", 00:14:01.586 "thread": "nvmf_tgt_poll_group_000", 00:14:01.586 "listen_address": { 00:14:01.586 "trtype": "TCP", 00:14:01.586 "adrfam": "IPv4", 00:14:01.586 "traddr": "10.0.0.2", 00:14:01.586 "trsvcid": "4420" 00:14:01.586 }, 00:14:01.586 "peer_address": { 00:14:01.586 "trtype": "TCP", 00:14:01.586 "adrfam": "IPv4", 00:14:01.586 "traddr": "10.0.0.1", 00:14:01.586 "trsvcid": "55330" 00:14:01.586 }, 00:14:01.586 "auth": { 00:14:01.586 "state": "completed", 00:14:01.586 "digest": "sha256", 00:14:01.586 "dhgroup": "null" 00:14:01.586 } 00:14:01.586 } 00:14:01.586 ]' 00:14:01.586 22:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:01.586 22:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:01.586 22:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:01.586 22:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:01.586 22:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:01.844 22:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.844 22:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:01.844 22:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.101 22:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODBjNWYwOTA0NGQ2M2JlNzQ4OWYyNGE3NTdmMzFjNmJhNjNiZDhiM2VhYjA3ZjBjYzFmNzJhNWUyYWZmYzE2MfSBhX4=: 00:14:03.033 22:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:03.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:03.033 22:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:03.033 22:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:03.033 22:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.033 22:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:03.033 22:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:03.033 22:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:03.033 22:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:03.033 22:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:03.291 22:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe2048 0 00:14:03.291 22:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:03.291 22:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:03.291 22:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:03.291 22:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:03.291 22:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:03.291 22:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.291 22:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:03.291 22:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.291 22:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:03.291 22:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.291 22:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.549 00:14:03.549 22:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:03.549 22:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:03.549 22:37:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.814 22:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.814 22:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:03.814 22:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:03.814 22:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.814 22:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:03.814 22:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:03.814 { 00:14:03.814 "cntlid": 9, 00:14:03.814 "qid": 0, 00:14:03.814 "state": "enabled", 00:14:03.814 "thread": "nvmf_tgt_poll_group_000", 00:14:03.814 "listen_address": { 00:14:03.814 "trtype": "TCP", 00:14:03.814 "adrfam": "IPv4", 00:14:03.814 "traddr": "10.0.0.2", 00:14:03.814 "trsvcid": "4420" 00:14:03.814 }, 00:14:03.814 "peer_address": { 00:14:03.814 "trtype": "TCP", 00:14:03.814 "adrfam": "IPv4", 00:14:03.814 "traddr": "10.0.0.1", 00:14:03.814 "trsvcid": "55352" 00:14:03.814 }, 00:14:03.814 "auth": { 00:14:03.814 "state": "completed", 00:14:03.814 "digest": "sha256", 00:14:03.814 "dhgroup": "ffdhe2048" 00:14:03.814 } 00:14:03.814 } 00:14:03.814 ]' 00:14:03.814 22:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:03.814 22:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:03.814 22:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:03.814 22:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:03.814 22:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:03.814 22:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.814 22:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.814 22:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.116 22:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjljNzA1NmQxMGZmNzhlZDdmNGJkYjgwYWQ3ODNiYTlmZjEwZDBlODUwZjlmZTVhup/mQw==: --dhchap-ctrl-secret DHHC-1:03:ZjJjNjlhODY3NWU2Nzk4NjUyN2VkODU2OWU2ODg4NDFhM2E4NmYxY2ZmNDNiZjE1MTM4YjRjOTY4YzdkZGMwNTOyvQM=: 00:14:05.047 22:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:05.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:05.047 22:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:05.047 22:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:05.047 22:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.047 22:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:05.047 22:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:05.047 22:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:05.047 22:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:14:05.303 22:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:14:05.303 22:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:05.303 22:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:05.303 22:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:05.303 22:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:05.303 22:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.303 22:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.303 22:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:05.303 22:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.303 22:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:05.303 22:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.303 22:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.866 00:14:05.866 22:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:05.866 22:37:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:05.866 22:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.124 22:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.124 22:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.124 22:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:06.124 22:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.124 22:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:06.124 22:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:06.124 { 00:14:06.124 "cntlid": 11, 00:14:06.124 "qid": 0, 00:14:06.124 "state": "enabled", 00:14:06.124 "thread": "nvmf_tgt_poll_group_000", 00:14:06.124 "listen_address": { 00:14:06.124 "trtype": "TCP", 00:14:06.124 "adrfam": "IPv4", 00:14:06.124 "traddr": "10.0.0.2", 00:14:06.124 "trsvcid": "4420" 00:14:06.124 }, 00:14:06.124 "peer_address": { 00:14:06.124 "trtype": "TCP", 00:14:06.124 "adrfam": "IPv4", 00:14:06.124 "traddr": "10.0.0.1", 00:14:06.124 "trsvcid": "55372" 00:14:06.124 }, 00:14:06.124 "auth": { 00:14:06.124 "state": "completed", 00:14:06.124 "digest": "sha256", 00:14:06.124 "dhgroup": "ffdhe2048" 00:14:06.124 } 00:14:06.124 } 00:14:06.124 ]' 00:14:06.124 22:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:06.124 22:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:06.124 22:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:06.124 22:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:06.124 22:37:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:06.124 22:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:06.124 22:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:06.124 22:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.381 22:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmU1MDNmYWNmMmNkNzAzNTBjZGI4ZWY1ODhkZTMxMjaqvICO: --dhchap-ctrl-secret DHHC-1:02:ZWZjMTY5ZTAzMjhhMTg1ZWQwMGVmNmY5YWQxZWUwNjllNGIyZmEwZDliYzQ5NmYz8d+Jsw==: 00:14:07.315 22:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:07.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:07.315 22:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:07.315 22:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:07.315 22:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.315 22:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:07.315 22:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:07.315 22:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:07.315 22:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:07.573 22:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:14:07.573 22:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:07.573 22:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:07.573 22:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:07.573 22:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:07.573 22:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:07.573 22:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:07.573 22:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:07.573 22:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.573 22:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:07.573 22:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:07.573 22:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:08.139 
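The secrets passed to `nvme connect --dhchap-secret` above share a `DHHC-1:<hh>:<base64>:` shape. A small sketch that splits those fields, under the assumption (not confirmed by this log) that the middle field identifies the secret's transformation hash, with `00` meaning none, `01` SHA-256, `02` SHA-384, and `03` SHA-512:

```shell
# Split a DHHC-1 secret string into its colon-delimited fields.
# Assumed layout: DHHC-1:<hh>:<base64 payload>:
parse_dhchap_secret() {
  local prefix hashid b64
  IFS=: read -r prefix hashid b64 _ <<<"$1"
  if [ "$prefix" != DHHC-1 ]; then
    echo "not a DHHC-1 secret" >&2
    return 1
  fi
  echo "hash-id=$hashid"
}
```

Applied to the key1 secret in the log (`DHHC-1:01:…`), this reports `hash-id=01`, matching the `sha256` digest the test configures.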
00:14:08.139 22:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:08.139 22:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:08.139 22:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.397 22:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:08.397 22:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:08.397 22:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:08.397 22:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.397 22:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:08.397 22:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:08.397 { 00:14:08.397 "cntlid": 13, 00:14:08.397 "qid": 0, 00:14:08.397 "state": "enabled", 00:14:08.397 "thread": "nvmf_tgt_poll_group_000", 00:14:08.397 "listen_address": { 00:14:08.397 "trtype": "TCP", 00:14:08.397 "adrfam": "IPv4", 00:14:08.397 "traddr": "10.0.0.2", 00:14:08.397 "trsvcid": "4420" 00:14:08.397 }, 00:14:08.397 "peer_address": { 00:14:08.397 "trtype": "TCP", 00:14:08.397 "adrfam": "IPv4", 00:14:08.397 "traddr": "10.0.0.1", 00:14:08.397 "trsvcid": "52666" 00:14:08.397 }, 00:14:08.397 "auth": { 00:14:08.397 "state": "completed", 00:14:08.397 "digest": "sha256", 00:14:08.397 "dhgroup": "ffdhe2048" 00:14:08.397 } 00:14:08.397 } 00:14:08.397 ]' 00:14:08.397 22:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:08.397 22:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:08.397 22:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:08.397 22:37:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:08.397 22:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:08.397 22:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:08.397 22:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:08.397 22:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.655 22:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODk4YzJmZDMxMjQ1MTljNzAwZWM5Njc1MGEyZjhkYjM5YjJhOTZiNDVkZWI4OWZkZBtlLA==: --dhchap-ctrl-secret DHHC-1:01:MzJkYzFjNGMwOTRhNjRjZWRlOTFiYmUwMTY3ZTIzOTm4K2J/: 00:14:09.589 22:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.589 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:09.589 22:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:09.589 22:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.589 22:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:09.589 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:09.589 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
00:14:09.589 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:09.847 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:14:09.847 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:09.847 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:09.847 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:09.847 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:09.847 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:09.847 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:09.847 22:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:09.847 22:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.847 22:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:09.847 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:09.847 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:10.103 
00:14:10.359 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:10.359 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:10.360 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.360 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.360 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:10.360 22:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:10.360 22:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.617 22:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:10.617 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:10.617 { 00:14:10.617 "cntlid": 15, 00:14:10.617 "qid": 0, 00:14:10.617 "state": "enabled", 00:14:10.617 "thread": "nvmf_tgt_poll_group_000", 00:14:10.617 "listen_address": { 00:14:10.617 "trtype": "TCP", 00:14:10.617 "adrfam": "IPv4", 00:14:10.617 "traddr": "10.0.0.2", 00:14:10.617 "trsvcid": "4420" 00:14:10.617 }, 00:14:10.617 "peer_address": { 00:14:10.617 "trtype": "TCP", 00:14:10.617 "adrfam": "IPv4", 00:14:10.617 "traddr": "10.0.0.1", 00:14:10.617 "trsvcid": "52694" 00:14:10.617 }, 00:14:10.617 "auth": { 00:14:10.617 "state": "completed", 00:14:10.617 "digest": "sha256", 00:14:10.617 "dhgroup": "ffdhe2048" 00:14:10.617 } 00:14:10.617 } 00:14:10.617 ]' 00:14:10.617 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:10.617 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:10.617 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:10.617 22:37:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:10.617 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:10.617 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:10.617 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:10.617 22:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:10.875 22:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODBjNWYwOTA0NGQ2M2JlNzQ4OWYyNGE3NTdmMzFjNmJhNjNiZDhiM2VhYjA3ZjBjYzFmNzJhNWUyYWZmYzE2MfSBhX4=: 00:14:11.809 22:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:11.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:11.809 22:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:11.809 22:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:11.809 22:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.809 22:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:11.809 22:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:11.809 22:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:11.809 22:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:11.809 22:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:12.066 22:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:14:12.066 22:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:12.066 22:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:12.066 22:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:12.066 22:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:12.066 22:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.066 22:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.066 22:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:12.066 22:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.066 22:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:12.066 22:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.066 22:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.323 00:14:12.323 22:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:12.323 22:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:12.323 22:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:12.580 22:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:12.580 22:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:12.580 22:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:12.580 22:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.581 22:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:12.581 22:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:12.581 { 00:14:12.581 "cntlid": 17, 00:14:12.581 "qid": 0, 00:14:12.581 "state": "enabled", 00:14:12.581 "thread": "nvmf_tgt_poll_group_000", 00:14:12.581 "listen_address": { 00:14:12.581 "trtype": "TCP", 00:14:12.581 "adrfam": "IPv4", 00:14:12.581 "traddr": "10.0.0.2", 00:14:12.581 "trsvcid": "4420" 00:14:12.581 }, 00:14:12.581 "peer_address": { 00:14:12.581 "trtype": "TCP", 00:14:12.581 "adrfam": "IPv4", 00:14:12.581 "traddr": "10.0.0.1", 00:14:12.581 "trsvcid": "52712" 00:14:12.581 }, 00:14:12.581 "auth": { 00:14:12.581 "state": "completed", 00:14:12.581 "digest": "sha256", 00:14:12.581 "dhgroup": "ffdhe3072" 00:14:12.581 } 00:14:12.581 } 00:14:12.581 ]' 00:14:12.581 22:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:12.837 22:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha256 == \s\h\a\2\5\6 ]] 00:14:12.837 22:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:12.837 22:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:12.837 22:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:12.837 22:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:12.837 22:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:12.837 22:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.095 22:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjljNzA1NmQxMGZmNzhlZDdmNGJkYjgwYWQ3ODNiYTlmZjEwZDBlODUwZjlmZTVhup/mQw==: --dhchap-ctrl-secret DHHC-1:03:ZjJjNjlhODY3NWU2Nzk4NjUyN2VkODU2OWU2ODg4NDFhM2E4NmYxY2ZmNDNiZjE1MTM4YjRjOTY4YzdkZGMwNTOyvQM=: 00:14:14.029 22:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.029 22:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:14.029 22:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:14.029 22:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.029 22:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:14.029 22:37:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:14.029 22:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:14.029 22:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:14.286 22:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:14:14.286 22:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:14.286 22:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:14.286 22:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:14.286 22:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:14.286 22:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.286 22:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:14.286 22:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:14.286 22:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.286 22:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:14.286 22:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:14.287 22:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:14.851 00:14:14.851 22:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:14.851 22:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.851 22:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:14.851 22:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.851 22:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.851 22:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:14.851 22:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.851 22:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:14.851 22:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:14.851 { 00:14:14.851 "cntlid": 19, 00:14:14.851 "qid": 0, 00:14:14.851 "state": "enabled", 00:14:14.851 "thread": "nvmf_tgt_poll_group_000", 00:14:14.851 "listen_address": { 00:14:14.851 "trtype": "TCP", 00:14:14.851 "adrfam": "IPv4", 00:14:14.851 "traddr": "10.0.0.2", 00:14:14.851 "trsvcid": "4420" 00:14:14.851 }, 00:14:14.851 "peer_address": { 00:14:14.851 "trtype": "TCP", 00:14:14.851 "adrfam": "IPv4", 00:14:14.851 "traddr": "10.0.0.1", 00:14:14.851 "trsvcid": "52750" 00:14:14.851 }, 00:14:14.851 "auth": { 00:14:14.851 "state": "completed", 00:14:14.851 "digest": "sha256", 00:14:14.851 "dhgroup": "ffdhe3072" 00:14:14.851 } 00:14:14.851 } 00:14:14.851 ]' 00:14:14.851 
22:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:15.108 22:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:15.108 22:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:15.108 22:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:15.108 22:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:15.108 22:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.108 22:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.108 22:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.366 22:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmU1MDNmYWNmMmNkNzAzNTBjZGI4ZWY1ODhkZTMxMjaqvICO: --dhchap-ctrl-secret DHHC-1:02:ZWZjMTY5ZTAzMjhhMTg1ZWQwMGVmNmY5YWQxZWUwNjllNGIyZmEwZDliYzQ5NmYz8d+Jsw==: 00:14:16.299 22:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.299 22:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:16.299 22:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:16.299 22:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.299 22:37:59 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:16.299 22:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:16.299 22:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:16.299 22:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:16.557 22:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:14:16.557 22:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:16.557 22:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:16.557 22:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:16.557 22:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:16.557 22:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.557 22:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:16.557 22:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:16.557 22:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.557 22:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:16.557 22:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:14:16.557 22:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:16.815 00:14:16.815 22:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:16.815 22:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:16.815 22:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.073 22:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.073 22:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.073 22:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:17.073 22:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.073 22:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:17.073 22:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:17.073 { 00:14:17.073 "cntlid": 21, 00:14:17.073 "qid": 0, 00:14:17.073 "state": "enabled", 00:14:17.073 "thread": "nvmf_tgt_poll_group_000", 00:14:17.073 "listen_address": { 00:14:17.073 "trtype": "TCP", 00:14:17.073 "adrfam": "IPv4", 00:14:17.073 "traddr": "10.0.0.2", 00:14:17.073 "trsvcid": "4420" 00:14:17.073 }, 00:14:17.073 "peer_address": { 00:14:17.073 "trtype": "TCP", 00:14:17.073 "adrfam": "IPv4", 00:14:17.073 "traddr": "10.0.0.1", 00:14:17.073 "trsvcid": "52772" 00:14:17.073 }, 00:14:17.073 "auth": { 00:14:17.073 "state": "completed", 00:14:17.073 "digest": 
"sha256", 00:14:17.073 "dhgroup": "ffdhe3072" 00:14:17.073 } 00:14:17.073 } 00:14:17.073 ]' 00:14:17.073 22:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:17.330 22:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:17.330 22:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:17.330 22:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:17.330 22:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:17.330 22:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.330 22:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.330 22:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.641 22:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODk4YzJmZDMxMjQ1MTljNzAwZWM5Njc1MGEyZjhkYjM5YjJhOTZiNDVkZWI4OWZkZBtlLA==: --dhchap-ctrl-secret DHHC-1:01:MzJkYzFjNGMwOTRhNjRjZWRlOTFiYmUwMTY3ZTIzOTm4K2J/: 00:14:18.583 22:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.583 22:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:18.583 22:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:18.583 22:38:01 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.583 22:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:18.583 22:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:18.583 22:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:18.584 22:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:18.841 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:14:18.841 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:18.841 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:18.841 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:18.841 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:18.841 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.841 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:18.841 22:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:18.841 22:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.841 22:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:18.841 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:18.841 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:19.099 00:14:19.099 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:19.099 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.099 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:19.357 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.357 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.357 22:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:19.357 22:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.357 22:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:19.357 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:19.357 { 00:14:19.357 "cntlid": 23, 00:14:19.357 "qid": 0, 00:14:19.357 "state": "enabled", 00:14:19.357 "thread": "nvmf_tgt_poll_group_000", 00:14:19.357 "listen_address": { 00:14:19.357 "trtype": "TCP", 00:14:19.357 "adrfam": "IPv4", 00:14:19.357 "traddr": "10.0.0.2", 00:14:19.357 "trsvcid": "4420" 00:14:19.357 }, 00:14:19.357 "peer_address": { 00:14:19.357 "trtype": "TCP", 00:14:19.357 "adrfam": "IPv4", 00:14:19.357 "traddr": "10.0.0.1", 00:14:19.357 "trsvcid": "50700" 00:14:19.357 }, 00:14:19.357 "auth": 
{ 00:14:19.357 "state": "completed", 00:14:19.357 "digest": "sha256", 00:14:19.357 "dhgroup": "ffdhe3072" 00:14:19.357 } 00:14:19.357 } 00:14:19.357 ]' 00:14:19.357 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:19.357 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:19.357 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:19.615 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:19.615 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:19.615 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.615 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.615 22:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.875 22:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODBjNWYwOTA0NGQ2M2JlNzQ4OWYyNGE3NTdmMzFjNmJhNjNiZDhiM2VhYjA3ZjBjYzFmNzJhNWUyYWZmYzE2MfSBhX4=: 00:14:20.809 22:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.809 22:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:20.809 22:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:20.809 22:38:04 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.809 22:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:20.809 22:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:20.809 22:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:20.809 22:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:20.809 22:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:21.067 22:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:14:21.067 22:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:21.067 22:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:21.067 22:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:21.067 22:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:21.067 22:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.067 22:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:21.067 22:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:21.067 22:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.067 22:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:21.067 22:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:21.067 22:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:21.325 00:14:21.325 22:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:21.325 22:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:21.325 22:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.583 22:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.583 22:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.583 22:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:21.583 22:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.583 22:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:21.583 22:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:21.583 { 00:14:21.583 "cntlid": 25, 00:14:21.583 "qid": 0, 00:14:21.583 "state": "enabled", 00:14:21.583 "thread": "nvmf_tgt_poll_group_000", 00:14:21.583 "listen_address": { 00:14:21.583 "trtype": "TCP", 00:14:21.583 "adrfam": "IPv4", 00:14:21.583 "traddr": "10.0.0.2", 00:14:21.583 "trsvcid": "4420" 00:14:21.583 }, 00:14:21.583 "peer_address": { 00:14:21.583 "trtype": "TCP", 
00:14:21.583 "adrfam": "IPv4", 00:14:21.583 "traddr": "10.0.0.1", 00:14:21.583 "trsvcid": "50728" 00:14:21.583 }, 00:14:21.583 "auth": { 00:14:21.583 "state": "completed", 00:14:21.583 "digest": "sha256", 00:14:21.583 "dhgroup": "ffdhe4096" 00:14:21.583 } 00:14:21.583 } 00:14:21.583 ]' 00:14:21.583 22:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:21.841 22:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:21.841 22:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:21.841 22:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:21.841 22:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:21.841 22:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.841 22:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.841 22:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.099 22:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjljNzA1NmQxMGZmNzhlZDdmNGJkYjgwYWQ3ODNiYTlmZjEwZDBlODUwZjlmZTVhup/mQw==: --dhchap-ctrl-secret DHHC-1:03:ZjJjNjlhODY3NWU2Nzk4NjUyN2VkODU2OWU2ODg4NDFhM2E4NmYxY2ZmNDNiZjE1MTM4YjRjOTY4YzdkZGMwNTOyvQM=: 00:14:23.032 22:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.032 22:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:23.032 22:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:23.032 22:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.032 22:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:23.032 22:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:23.032 22:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:23.032 22:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:23.290 22:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:14:23.290 22:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:23.290 22:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:23.290 22:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:23.290 22:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:23.290 22:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.290 22:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.290 22:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:23.290 22:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.290 22:38:06 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:23.290 22:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.290 22:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.854 00:14:23.854 22:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:23.855 22:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.855 22:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:23.855 22:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.855 22:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.855 22:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:23.855 22:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.855 22:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:23.855 22:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:23.855 { 00:14:23.855 "cntlid": 27, 00:14:23.855 "qid": 0, 00:14:23.855 "state": "enabled", 00:14:23.855 "thread": "nvmf_tgt_poll_group_000", 00:14:23.855 "listen_address": { 00:14:23.855 "trtype": "TCP", 00:14:23.855 "adrfam": 
"IPv4", 00:14:23.855 "traddr": "10.0.0.2", 00:14:23.855 "trsvcid": "4420" 00:14:23.855 }, 00:14:23.855 "peer_address": { 00:14:23.855 "trtype": "TCP", 00:14:23.855 "adrfam": "IPv4", 00:14:23.855 "traddr": "10.0.0.1", 00:14:23.855 "trsvcid": "50752" 00:14:23.855 }, 00:14:23.855 "auth": { 00:14:23.855 "state": "completed", 00:14:23.855 "digest": "sha256", 00:14:23.855 "dhgroup": "ffdhe4096" 00:14:23.855 } 00:14:23.855 } 00:14:23.855 ]' 00:14:23.855 22:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:24.112 22:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:24.112 22:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:24.112 22:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:24.112 22:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:24.112 22:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.112 22:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.112 22:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.368 22:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmU1MDNmYWNmMmNkNzAzNTBjZGI4ZWY1ODhkZTMxMjaqvICO: --dhchap-ctrl-secret DHHC-1:02:ZWZjMTY5ZTAzMjhhMTg1ZWQwMGVmNmY5YWQxZWUwNjllNGIyZmEwZDliYzQ5NmYz8d+Jsw==: 00:14:25.298 22:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:14:25.298 22:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:25.299 22:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:25.299 22:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.299 22:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:25.299 22:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:25.299 22:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:25.299 22:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:25.556 22:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:14:25.556 22:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:25.556 22:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:25.556 22:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:25.556 22:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:25.556 22:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.556 22:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.556 22:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:25.556 22:38:08 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.556 22:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:25.556 22:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.556 22:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.813 00:14:25.813 22:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:25.813 22:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:25.813 22:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.070 22:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.070 22:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.070 22:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:26.070 22:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.070 22:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:26.070 22:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:26.070 { 00:14:26.070 "cntlid": 29, 00:14:26.070 "qid": 0, 00:14:26.070 "state": "enabled", 00:14:26.070 "thread": 
"nvmf_tgt_poll_group_000", 00:14:26.070 "listen_address": { 00:14:26.070 "trtype": "TCP", 00:14:26.070 "adrfam": "IPv4", 00:14:26.070 "traddr": "10.0.0.2", 00:14:26.071 "trsvcid": "4420" 00:14:26.071 }, 00:14:26.071 "peer_address": { 00:14:26.071 "trtype": "TCP", 00:14:26.071 "adrfam": "IPv4", 00:14:26.071 "traddr": "10.0.0.1", 00:14:26.071 "trsvcid": "50780" 00:14:26.071 }, 00:14:26.071 "auth": { 00:14:26.071 "state": "completed", 00:14:26.071 "digest": "sha256", 00:14:26.071 "dhgroup": "ffdhe4096" 00:14:26.071 } 00:14:26.071 } 00:14:26.071 ]' 00:14:26.071 22:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:26.329 22:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:26.329 22:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:26.329 22:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:26.329 22:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:26.329 22:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.329 22:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.329 22:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.587 22:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODk4YzJmZDMxMjQ1MTljNzAwZWM5Njc1MGEyZjhkYjM5YjJhOTZiNDVkZWI4OWZkZBtlLA==: --dhchap-ctrl-secret DHHC-1:01:MzJkYzFjNGMwOTRhNjRjZWRlOTFiYmUwMTY3ZTIzOTm4K2J/: 00:14:27.519 22:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.519 22:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:27.519 22:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:27.519 22:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.519 22:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:27.519 22:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:27.519 22:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:27.519 22:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:27.777 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:14:27.777 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:27.777 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:27.777 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:27.777 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:27.777 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.777 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:27.777 22:38:11 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:14:27.777 22:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.777 22:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:27.777 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:27.778 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:28.341 00:14:28.341 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:28.341 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.341 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:28.341 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.341 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.341 22:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:28.341 22:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.341 22:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:28.341 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:28.341 { 00:14:28.341 "cntlid": 31, 00:14:28.341 "qid": 0, 00:14:28.341 "state": "enabled", 00:14:28.341 "thread": 
"nvmf_tgt_poll_group_000", 00:14:28.341 "listen_address": { 00:14:28.341 "trtype": "TCP", 00:14:28.341 "adrfam": "IPv4", 00:14:28.341 "traddr": "10.0.0.2", 00:14:28.341 "trsvcid": "4420" 00:14:28.341 }, 00:14:28.341 "peer_address": { 00:14:28.341 "trtype": "TCP", 00:14:28.341 "adrfam": "IPv4", 00:14:28.341 "traddr": "10.0.0.1", 00:14:28.341 "trsvcid": "37646" 00:14:28.341 }, 00:14:28.341 "auth": { 00:14:28.341 "state": "completed", 00:14:28.341 "digest": "sha256", 00:14:28.341 "dhgroup": "ffdhe4096" 00:14:28.341 } 00:14:28.341 } 00:14:28.341 ]' 00:14:28.341 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:28.597 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:28.597 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:28.597 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:28.597 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:28.597 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.597 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.597 22:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.853 22:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODBjNWYwOTA0NGQ2M2JlNzQ4OWYyNGE3NTdmMzFjNmJhNjNiZDhiM2VhYjA3ZjBjYzFmNzJhNWUyYWZmYzE2MfSBhX4=: 00:14:29.785 22:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.785 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.785 22:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:29.785 22:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:29.785 22:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.785 22:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:29.785 22:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:29.785 22:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:29.785 22:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:29.785 22:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:30.042 22:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:14:30.042 22:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:30.042 22:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:30.042 22:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:30.042 22:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:30.042 22:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.042 22:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:14:30.042 22:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:30.042 22:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.042 22:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:30.042 22:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.042 22:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.606 00:14:30.606 22:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:30.606 22:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:30.606 22:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.863 22:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.863 22:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.863 22:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:30.863 22:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.863 22:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:30.863 22:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:14:30.863 { 00:14:30.863 "cntlid": 33, 00:14:30.863 "qid": 0, 00:14:30.863 "state": "enabled", 00:14:30.863 "thread": "nvmf_tgt_poll_group_000", 00:14:30.863 "listen_address": { 00:14:30.863 "trtype": "TCP", 00:14:30.863 "adrfam": "IPv4", 00:14:30.863 "traddr": "10.0.0.2", 00:14:30.863 "trsvcid": "4420" 00:14:30.863 }, 00:14:30.863 "peer_address": { 00:14:30.863 "trtype": "TCP", 00:14:30.863 "adrfam": "IPv4", 00:14:30.863 "traddr": "10.0.0.1", 00:14:30.863 "trsvcid": "37668" 00:14:30.863 }, 00:14:30.863 "auth": { 00:14:30.863 "state": "completed", 00:14:30.863 "digest": "sha256", 00:14:30.863 "dhgroup": "ffdhe6144" 00:14:30.863 } 00:14:30.863 } 00:14:30.863 ]' 00:14:30.863 22:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:30.863 22:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:30.863 22:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:30.863 22:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:30.863 22:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:30.863 22:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.863 22:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.863 22:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.120 22:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjljNzA1NmQxMGZmNzhlZDdmNGJkYjgwYWQ3ODNiYTlmZjEwZDBlODUwZjlmZTVhup/mQw==: --dhchap-ctrl-secret 
DHHC-1:03:ZjJjNjlhODY3NWU2Nzk4NjUyN2VkODU2OWU2ODg4NDFhM2E4NmYxY2ZmNDNiZjE1MTM4YjRjOTY4YzdkZGMwNTOyvQM=: 00:14:32.111 22:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.111 22:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:32.111 22:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:32.111 22:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.111 22:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:32.111 22:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:32.111 22:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:32.111 22:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:32.675 22:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:14:32.675 22:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:32.675 22:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:32.675 22:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:32.675 22:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:32.675 22:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.675 22:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.675 22:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:32.675 22:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.675 22:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:32.675 22:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.675 22:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.932 00:14:33.189 22:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:33.189 22:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:33.189 22:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.189 22:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.189 22:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.189 22:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:33.189 22:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.446 22:38:16 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:33.446 22:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:33.446 { 00:14:33.446 "cntlid": 35, 00:14:33.446 "qid": 0, 00:14:33.446 "state": "enabled", 00:14:33.446 "thread": "nvmf_tgt_poll_group_000", 00:14:33.446 "listen_address": { 00:14:33.446 "trtype": "TCP", 00:14:33.446 "adrfam": "IPv4", 00:14:33.447 "traddr": "10.0.0.2", 00:14:33.447 "trsvcid": "4420" 00:14:33.447 }, 00:14:33.447 "peer_address": { 00:14:33.447 "trtype": "TCP", 00:14:33.447 "adrfam": "IPv4", 00:14:33.447 "traddr": "10.0.0.1", 00:14:33.447 "trsvcid": "37708" 00:14:33.447 }, 00:14:33.447 "auth": { 00:14:33.447 "state": "completed", 00:14:33.447 "digest": "sha256", 00:14:33.447 "dhgroup": "ffdhe6144" 00:14:33.447 } 00:14:33.447 } 00:14:33.447 ]' 00:14:33.447 22:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:33.447 22:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:33.447 22:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:33.447 22:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:33.447 22:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:33.447 22:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.447 22:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.447 22:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.704 22:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmU1MDNmYWNmMmNkNzAzNTBjZGI4ZWY1ODhkZTMxMjaqvICO: --dhchap-ctrl-secret DHHC-1:02:ZWZjMTY5ZTAzMjhhMTg1ZWQwMGVmNmY5YWQxZWUwNjllNGIyZmEwZDliYzQ5NmYz8d+Jsw==: 00:14:34.637 22:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.637 22:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:34.637 22:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:34.637 22:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.637 22:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:34.637 22:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:34.637 22:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:34.638 22:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:34.895 22:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:14:34.895 22:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:34.895 22:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:34.895 22:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:34.895 22:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:34.895 22:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:14:34.895 22:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.895 22:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:34.895 22:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.895 22:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:34.895 22:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.895 22:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.462 00:14:35.462 22:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:35.462 22:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:35.462 22:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.721 22:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.721 22:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.721 22:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:35.721 22:38:19 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.721 22:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:35.721 22:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:35.721 { 00:14:35.721 "cntlid": 37, 00:14:35.721 "qid": 0, 00:14:35.721 "state": "enabled", 00:14:35.721 "thread": "nvmf_tgt_poll_group_000", 00:14:35.721 "listen_address": { 00:14:35.721 "trtype": "TCP", 00:14:35.721 "adrfam": "IPv4", 00:14:35.721 "traddr": "10.0.0.2", 00:14:35.721 "trsvcid": "4420" 00:14:35.721 }, 00:14:35.721 "peer_address": { 00:14:35.721 "trtype": "TCP", 00:14:35.721 "adrfam": "IPv4", 00:14:35.721 "traddr": "10.0.0.1", 00:14:35.721 "trsvcid": "37742" 00:14:35.721 }, 00:14:35.721 "auth": { 00:14:35.721 "state": "completed", 00:14:35.721 "digest": "sha256", 00:14:35.721 "dhgroup": "ffdhe6144" 00:14:35.721 } 00:14:35.721 } 00:14:35.721 ]' 00:14:35.721 22:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:35.721 22:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:35.721 22:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:35.979 22:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:35.979 22:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:35.979 22:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.979 22:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.979 22:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.238 22:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODk4YzJmZDMxMjQ1MTljNzAwZWM5Njc1MGEyZjhkYjM5YjJhOTZiNDVkZWI4OWZkZBtlLA==: --dhchap-ctrl-secret DHHC-1:01:MzJkYzFjNGMwOTRhNjRjZWRlOTFiYmUwMTY3ZTIzOTm4K2J/: 00:14:37.173 22:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.173 22:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:37.173 22:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:37.173 22:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.173 22:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:37.173 22:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:37.173 22:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:37.173 22:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:37.431 22:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:14:37.431 22:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:37.431 22:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:37.431 22:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:37.431 22:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:37.431 22:38:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.431 22:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:37.431 22:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:37.431 22:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.431 22:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:37.431 22:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:37.431 22:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:37.995 00:14:37.995 22:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:37.995 22:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:37.995 22:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.253 22:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.253 22:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.253 22:38:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:38.253 22:38:21 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.253 22:38:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:38.253 22:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:38.253 { 00:14:38.253 "cntlid": 39, 00:14:38.253 "qid": 0, 00:14:38.253 "state": "enabled", 00:14:38.253 "thread": "nvmf_tgt_poll_group_000", 00:14:38.253 "listen_address": { 00:14:38.253 "trtype": "TCP", 00:14:38.253 "adrfam": "IPv4", 00:14:38.253 "traddr": "10.0.0.2", 00:14:38.253 "trsvcid": "4420" 00:14:38.253 }, 00:14:38.253 "peer_address": { 00:14:38.253 "trtype": "TCP", 00:14:38.253 "adrfam": "IPv4", 00:14:38.253 "traddr": "10.0.0.1", 00:14:38.253 "trsvcid": "37768" 00:14:38.253 }, 00:14:38.253 "auth": { 00:14:38.253 "state": "completed", 00:14:38.253 "digest": "sha256", 00:14:38.253 "dhgroup": "ffdhe6144" 00:14:38.253 } 00:14:38.253 } 00:14:38.253 ]' 00:14:38.253 22:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:38.253 22:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:38.253 22:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:38.253 22:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:38.253 22:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:38.253 22:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.253 22:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.253 22:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.510 22:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODBjNWYwOTA0NGQ2M2JlNzQ4OWYyNGE3NTdmMzFjNmJhNjNiZDhiM2VhYjA3ZjBjYzFmNzJhNWUyYWZmYzE2MfSBhX4=: 00:14:39.906 22:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.907 22:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:39.907 22:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:39.907 22:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.907 22:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:39.907 22:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:39.907 22:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:39.907 22:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:39.907 22:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:39.907 22:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:14:39.907 22:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:39.907 22:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:39.907 22:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:39.907 22:38:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:14:39.907 22:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.907 22:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.907 22:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:39.907 22:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.907 22:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:39.907 22:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.907 22:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.837 00:14:40.837 22:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:40.837 22:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:40.837 22:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.095 22:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.095 22:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:14:41.095 22:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:41.095 22:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.095 22:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:41.095 22:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:41.095 { 00:14:41.095 "cntlid": 41, 00:14:41.095 "qid": 0, 00:14:41.095 "state": "enabled", 00:14:41.095 "thread": "nvmf_tgt_poll_group_000", 00:14:41.095 "listen_address": { 00:14:41.095 "trtype": "TCP", 00:14:41.095 "adrfam": "IPv4", 00:14:41.095 "traddr": "10.0.0.2", 00:14:41.095 "trsvcid": "4420" 00:14:41.095 }, 00:14:41.095 "peer_address": { 00:14:41.095 "trtype": "TCP", 00:14:41.095 "adrfam": "IPv4", 00:14:41.095 "traddr": "10.0.0.1", 00:14:41.095 "trsvcid": "53958" 00:14:41.095 }, 00:14:41.095 "auth": { 00:14:41.095 "state": "completed", 00:14:41.095 "digest": "sha256", 00:14:41.095 "dhgroup": "ffdhe8192" 00:14:41.095 } 00:14:41.095 } 00:14:41.095 ]' 00:14:41.095 22:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:41.095 22:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:41.095 22:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:41.352 22:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:41.352 22:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:41.352 22:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.352 22:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.352 22:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:14:41.609 22:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjljNzA1NmQxMGZmNzhlZDdmNGJkYjgwYWQ3ODNiYTlmZjEwZDBlODUwZjlmZTVhup/mQw==: --dhchap-ctrl-secret DHHC-1:03:ZjJjNjlhODY3NWU2Nzk4NjUyN2VkODU2OWU2ODg4NDFhM2E4NmYxY2ZmNDNiZjE1MTM4YjRjOTY4YzdkZGMwNTOyvQM=: 00:14:42.541 22:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.541 22:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:42.541 22:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:42.541 22:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.541 22:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:42.541 22:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:42.541 22:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:42.541 22:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:42.799 22:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:14:42.799 22:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:42.799 22:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:14:42.799 22:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:42.799 22:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:42.799 22:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.799 22:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.799 22:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:42.799 22:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.799 22:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:42.799 22:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.799 22:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.733 00:14:43.733 22:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:43.733 22:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:43.733 22:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.990 22:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:14:43.990 22:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.990 22:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:43.990 22:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.990 22:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:43.990 22:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:43.990 { 00:14:43.990 "cntlid": 43, 00:14:43.990 "qid": 0, 00:14:43.990 "state": "enabled", 00:14:43.990 "thread": "nvmf_tgt_poll_group_000", 00:14:43.990 "listen_address": { 00:14:43.990 "trtype": "TCP", 00:14:43.990 "adrfam": "IPv4", 00:14:43.990 "traddr": "10.0.0.2", 00:14:43.990 "trsvcid": "4420" 00:14:43.990 }, 00:14:43.990 "peer_address": { 00:14:43.990 "trtype": "TCP", 00:14:43.990 "adrfam": "IPv4", 00:14:43.990 "traddr": "10.0.0.1", 00:14:43.990 "trsvcid": "53982" 00:14:43.990 }, 00:14:43.990 "auth": { 00:14:43.990 "state": "completed", 00:14:43.990 "digest": "sha256", 00:14:43.990 "dhgroup": "ffdhe8192" 00:14:43.990 } 00:14:43.990 } 00:14:43.990 ]' 00:14:43.990 22:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:43.990 22:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:43.990 22:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:43.990 22:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:43.990 22:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:43.990 22:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.991 22:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.991 22:38:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.248 22:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmU1MDNmYWNmMmNkNzAzNTBjZGI4ZWY1ODhkZTMxMjaqvICO: --dhchap-ctrl-secret DHHC-1:02:ZWZjMTY5ZTAzMjhhMTg1ZWQwMGVmNmY5YWQxZWUwNjllNGIyZmEwZDliYzQ5NmYz8d+Jsw==: 00:14:45.619 22:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.619 22:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:45.619 22:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:45.619 22:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.619 22:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:45.619 22:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:45.619 22:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:45.619 22:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:45.619 22:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:14:45.619 22:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:14:45.619 22:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:45.619 22:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:45.619 22:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:45.619 22:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.619 22:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.619 22:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:45.619 22:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.619 22:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:45.619 22:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.619 22:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:46.597 00:14:46.598 22:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:46.598 22:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:46.598 22:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:14:46.854 22:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.854 22:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.854 22:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:46.854 22:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.854 22:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:46.854 22:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:46.854 { 00:14:46.854 "cntlid": 45, 00:14:46.854 "qid": 0, 00:14:46.854 "state": "enabled", 00:14:46.855 "thread": "nvmf_tgt_poll_group_000", 00:14:46.855 "listen_address": { 00:14:46.855 "trtype": "TCP", 00:14:46.855 "adrfam": "IPv4", 00:14:46.855 "traddr": "10.0.0.2", 00:14:46.855 "trsvcid": "4420" 00:14:46.855 }, 00:14:46.855 "peer_address": { 00:14:46.855 "trtype": "TCP", 00:14:46.855 "adrfam": "IPv4", 00:14:46.855 "traddr": "10.0.0.1", 00:14:46.855 "trsvcid": "54002" 00:14:46.855 }, 00:14:46.855 "auth": { 00:14:46.855 "state": "completed", 00:14:46.855 "digest": "sha256", 00:14:46.855 "dhgroup": "ffdhe8192" 00:14:46.855 } 00:14:46.855 } 00:14:46.855 ]' 00:14:46.855 22:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:46.855 22:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:46.855 22:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:46.855 22:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:46.855 22:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:46.855 22:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.855 22:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:46.855 22:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.112 22:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODk4YzJmZDMxMjQ1MTljNzAwZWM5Njc1MGEyZjhkYjM5YjJhOTZiNDVkZWI4OWZkZBtlLA==: --dhchap-ctrl-secret DHHC-1:01:MzJkYzFjNGMwOTRhNjRjZWRlOTFiYmUwMTY3ZTIzOTm4K2J/: 00:14:48.047 22:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.047 22:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:48.047 22:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:48.047 22:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.047 22:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:48.047 22:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:48.047 22:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:48.047 22:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:48.305 22:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:14:48.305 22:38:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:48.305 22:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:48.305 22:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:48.305 22:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:48.305 22:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.305 22:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:48.305 22:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:48.305 22:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.305 22:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:48.305 22:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:48.305 22:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:49.238 00:14:49.238 22:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:49.238 22:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:49.238 22:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:14:49.496 22:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.496 22:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.496 22:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:49.496 22:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.496 22:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:49.496 22:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:49.496 { 00:14:49.496 "cntlid": 47, 00:14:49.496 "qid": 0, 00:14:49.496 "state": "enabled", 00:14:49.496 "thread": "nvmf_tgt_poll_group_000", 00:14:49.496 "listen_address": { 00:14:49.496 "trtype": "TCP", 00:14:49.496 "adrfam": "IPv4", 00:14:49.496 "traddr": "10.0.0.2", 00:14:49.496 "trsvcid": "4420" 00:14:49.496 }, 00:14:49.496 "peer_address": { 00:14:49.496 "trtype": "TCP", 00:14:49.496 "adrfam": "IPv4", 00:14:49.496 "traddr": "10.0.0.1", 00:14:49.496 "trsvcid": "47076" 00:14:49.496 }, 00:14:49.496 "auth": { 00:14:49.496 "state": "completed", 00:14:49.496 "digest": "sha256", 00:14:49.496 "dhgroup": "ffdhe8192" 00:14:49.496 } 00:14:49.496 } 00:14:49.496 ]' 00:14:49.496 22:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:49.496 22:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:49.496 22:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:49.496 22:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:49.496 22:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:49.754 22:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.754 22:38:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.754 22:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.011 22:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODBjNWYwOTA0NGQ2M2JlNzQ4OWYyNGE3NTdmMzFjNmJhNjNiZDhiM2VhYjA3ZjBjYzFmNzJhNWUyYWZmYzE2MfSBhX4=: 00:14:50.943 22:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.943 22:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:50.943 22:38:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:50.943 22:38:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.943 22:38:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:50.943 22:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:50.943 22:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:50.943 22:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:50.943 22:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:50.943 22:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups null 00:14:51.200 22:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:14:51.200 22:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:51.200 22:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:51.200 22:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:51.200 22:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:51.200 22:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.200 22:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.200 22:38:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:51.200 22:38:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.200 22:38:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:51.200 22:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.200 22:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.459 00:14:51.459 22:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:51.459 22:38:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:51.459 22:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.717 22:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.717 22:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.717 22:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:51.717 22:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.717 22:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:51.717 22:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:51.717 { 00:14:51.717 "cntlid": 49, 00:14:51.717 "qid": 0, 00:14:51.717 "state": "enabled", 00:14:51.717 "thread": "nvmf_tgt_poll_group_000", 00:14:51.717 "listen_address": { 00:14:51.717 "trtype": "TCP", 00:14:51.717 "adrfam": "IPv4", 00:14:51.717 "traddr": "10.0.0.2", 00:14:51.717 "trsvcid": "4420" 00:14:51.717 }, 00:14:51.717 "peer_address": { 00:14:51.717 "trtype": "TCP", 00:14:51.717 "adrfam": "IPv4", 00:14:51.717 "traddr": "10.0.0.1", 00:14:51.717 "trsvcid": "47104" 00:14:51.717 }, 00:14:51.717 "auth": { 00:14:51.717 "state": "completed", 00:14:51.717 "digest": "sha384", 00:14:51.717 "dhgroup": "null" 00:14:51.717 } 00:14:51.717 } 00:14:51.717 ]' 00:14:51.717 22:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:51.717 22:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:51.717 22:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:51.717 22:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:51.717 22:38:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:51.717 22:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.717 22:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.717 22:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.975 22:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjljNzA1NmQxMGZmNzhlZDdmNGJkYjgwYWQ3ODNiYTlmZjEwZDBlODUwZjlmZTVhup/mQw==: --dhchap-ctrl-secret DHHC-1:03:ZjJjNjlhODY3NWU2Nzk4NjUyN2VkODU2OWU2ODg4NDFhM2E4NmYxY2ZmNDNiZjE1MTM4YjRjOTY4YzdkZGMwNTOyvQM=: 00:14:53.347 22:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.348 22:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:53.348 22:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:53.348 22:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.348 22:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:53.348 22:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:53.348 22:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:53.348 22:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:53.348 22:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:14:53.348 22:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:53.348 22:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:53.348 22:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:53.348 22:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:53.348 22:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.348 22:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.348 22:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:53.348 22:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.348 22:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:53.348 22:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.348 22:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.913 00:14:53.913 
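The `--dhchap-secret` / `--dhchap-ctrl-secret` values in the trace above use the DH-HMAC-CHAP key format shared by nvme-cli and SPDK: `DHHC-1:<hash id>:<base64 payload>:`, where the id field indicates the key transformation (`00` = none, `01`/`02`/`03` = SHA-256/384/512, consistent with the `DHHC-1:03:` secrets being the longest here). As a hedged sketch only — real tools such as `nvme gen-dhchap-key` also append a CRC-32 to the key material before encoding, which is omitted below — a syntactically similar secret can be built from random bytes:

```shell
# Build a DHHC-1-shaped secret from 32 random bytes (illustration only:
# no CRC-32 is appended, so a strict parser may reject it).
payload=$(head -c 32 /dev/urandom | base64 -w 0)
secret="DHHC-1:00:${payload}:"
echo "$secret"
```

The same string is what `nvmf_subsystem_add_host --dhchap-key` references on the target side via named keys.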
22:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:53.913 22:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:53.913 22:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.913 22:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.913 22:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.913 22:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:53.913 22:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.913 22:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:53.913 22:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:53.913 { 00:14:53.913 "cntlid": 51, 00:14:53.913 "qid": 0, 00:14:53.913 "state": "enabled", 00:14:53.913 "thread": "nvmf_tgt_poll_group_000", 00:14:53.913 "listen_address": { 00:14:53.913 "trtype": "TCP", 00:14:53.913 "adrfam": "IPv4", 00:14:53.913 "traddr": "10.0.0.2", 00:14:53.913 "trsvcid": "4420" 00:14:53.913 }, 00:14:53.913 "peer_address": { 00:14:53.913 "trtype": "TCP", 00:14:53.913 "adrfam": "IPv4", 00:14:53.913 "traddr": "10.0.0.1", 00:14:53.913 "trsvcid": "47130" 00:14:53.913 }, 00:14:53.913 "auth": { 00:14:53.913 "state": "completed", 00:14:53.913 "digest": "sha384", 00:14:53.913 "dhgroup": "null" 00:14:53.913 } 00:14:53.913 } 00:14:53.914 ]' 00:14:53.914 22:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:54.171 22:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:54.171 22:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:54.171 22:38:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:54.171 22:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:54.171 22:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.171 22:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.171 22:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.429 22:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmU1MDNmYWNmMmNkNzAzNTBjZGI4ZWY1ODhkZTMxMjaqvICO: --dhchap-ctrl-secret DHHC-1:02:ZWZjMTY5ZTAzMjhhMTg1ZWQwMGVmNmY5YWQxZWUwNjllNGIyZmEwZDliYzQ5NmYz8d+Jsw==: 00:14:55.418 22:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.418 22:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:55.418 22:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:55.418 22:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.418 22:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:55.418 22:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:55.418 22:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:55.418 22:38:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:55.676 22:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:14:55.676 22:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:55.676 22:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:55.676 22:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:55.676 22:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:55.676 22:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.676 22:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.676 22:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:55.676 22:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.676 22:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:55.676 22:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.676 22:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:14:55.934 00:14:55.934 22:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:55.934 22:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:55.934 22:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.191 22:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.191 22:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.191 22:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:56.191 22:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.191 22:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:56.191 22:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:56.191 { 00:14:56.191 "cntlid": 53, 00:14:56.191 "qid": 0, 00:14:56.191 "state": "enabled", 00:14:56.191 "thread": "nvmf_tgt_poll_group_000", 00:14:56.191 "listen_address": { 00:14:56.191 "trtype": "TCP", 00:14:56.191 "adrfam": "IPv4", 00:14:56.191 "traddr": "10.0.0.2", 00:14:56.191 "trsvcid": "4420" 00:14:56.191 }, 00:14:56.191 "peer_address": { 00:14:56.191 "trtype": "TCP", 00:14:56.191 "adrfam": "IPv4", 00:14:56.191 "traddr": "10.0.0.1", 00:14:56.191 "trsvcid": "47154" 00:14:56.191 }, 00:14:56.191 "auth": { 00:14:56.191 "state": "completed", 00:14:56.191 "digest": "sha384", 00:14:56.191 "dhgroup": "null" 00:14:56.191 } 00:14:56.191 } 00:14:56.191 ]' 00:14:56.191 22:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:56.191 22:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:56.191 22:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:14:56.191 22:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:56.191 22:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:56.448 22:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.448 22:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.448 22:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.705 22:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODk4YzJmZDMxMjQ1MTljNzAwZWM5Njc1MGEyZjhkYjM5YjJhOTZiNDVkZWI4OWZkZBtlLA==: --dhchap-ctrl-secret DHHC-1:01:MzJkYzFjNGMwOTRhNjRjZWRlOTFiYmUwMTY3ZTIzOTm4K2J/: 00:14:57.635 22:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.635 22:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:57.635 22:38:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:57.635 22:38:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.635 22:38:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:57.635 22:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:57.635 22:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups null 00:14:57.635 22:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:57.893 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:14:57.893 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:57.893 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:57.893 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:57.893 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:57.893 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.893 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:57.893 22:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:57.893 22:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.893 22:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:57.893 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:57.893 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
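The repeating block in the trace (set_options, add_host, attach_controller, check qpair auth state, detach, nvme connect/disconnect, remove_host) comes from the nested loops visible in the `target/auth.sh@91`–`@94` lines: `for digest`, `for dhgroup`, `for keyid`. A minimal sketch of that sweep, with the actual `rpc.py` and `nvme` invocations stubbed out by `echo` and the digest/dhgroup/key lists assumed from what the trace shows (sha256/sha384, null/ffdhe2048/ffdhe8192, keys 0-3) plus SPDK's remaining supported values:

```shell
# Stubbed sketch of the digest x dhgroup x key-id sweep driven by auth.sh.
# Each combination reconfigures host-side DH-HMAC-CHAP options and
# re-attaches the controller with the matching key pair.
for digest in sha256 sha384 sha512; do
  for dhgroup in null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
    for keyid in 0 1 2 3; do
      echo "rpc.py bdev_nvme_set_options --dhchap-digests $digest --dhchap-dhgroups $dhgroup"
      echo "rpc.py bdev_nvme_attach_controller ... --dhchap-key key$keyid"
    done
  done
done
```

After each attach, the trace's `jq` checks on `nvmf_subsystem_get_qpairs` confirm that `auth.digest`, `auth.dhgroup`, and `auth.state == completed` match the configured combination.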
00:14:58.149 00:14:58.149 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:58.149 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:58.149 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.406 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.406 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.406 22:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:58.406 22:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.406 22:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:58.406 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:58.406 { 00:14:58.406 "cntlid": 55, 00:14:58.406 "qid": 0, 00:14:58.406 "state": "enabled", 00:14:58.406 "thread": "nvmf_tgt_poll_group_000", 00:14:58.406 "listen_address": { 00:14:58.406 "trtype": "TCP", 00:14:58.406 "adrfam": "IPv4", 00:14:58.406 "traddr": "10.0.0.2", 00:14:58.406 "trsvcid": "4420" 00:14:58.406 }, 00:14:58.406 "peer_address": { 00:14:58.406 "trtype": "TCP", 00:14:58.406 "adrfam": "IPv4", 00:14:58.406 "traddr": "10.0.0.1", 00:14:58.406 "trsvcid": "58102" 00:14:58.406 }, 00:14:58.406 "auth": { 00:14:58.406 "state": "completed", 00:14:58.406 "digest": "sha384", 00:14:58.406 "dhgroup": "null" 00:14:58.406 } 00:14:58.406 } 00:14:58.406 ]' 00:14:58.406 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:58.406 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:58.406 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:58.406 
22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:58.406 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:58.406 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.406 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.406 22:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.663 22:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODBjNWYwOTA0NGQ2M2JlNzQ4OWYyNGE3NTdmMzFjNmJhNjNiZDhiM2VhYjA3ZjBjYzFmNzJhNWUyYWZmYzE2MfSBhX4=: 00:14:59.595 22:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.851 22:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:59.851 22:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:59.851 22:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.851 22:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:59.851 22:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:59.851 22:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:59.851 22:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:59.851 22:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:00.107 22:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:15:00.107 22:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:00.107 22:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:00.107 22:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:00.107 22:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:00.107 22:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.107 22:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.107 22:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:00.107 22:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.107 22:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:00.107 22:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.107 22:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.364 00:15:00.364 22:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:00.365 22:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:00.365 22:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.623 22:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.623 22:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.623 22:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:00.623 22:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.623 22:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:00.623 22:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:00.623 { 00:15:00.623 "cntlid": 57, 00:15:00.623 "qid": 0, 00:15:00.623 "state": "enabled", 00:15:00.623 "thread": "nvmf_tgt_poll_group_000", 00:15:00.623 "listen_address": { 00:15:00.623 "trtype": "TCP", 00:15:00.623 "adrfam": "IPv4", 00:15:00.623 "traddr": "10.0.0.2", 00:15:00.623 "trsvcid": "4420" 00:15:00.623 }, 00:15:00.623 "peer_address": { 00:15:00.623 "trtype": "TCP", 00:15:00.623 "adrfam": "IPv4", 00:15:00.623 "traddr": "10.0.0.1", 00:15:00.623 "trsvcid": "58120" 00:15:00.623 }, 00:15:00.623 "auth": { 00:15:00.623 "state": "completed", 00:15:00.623 "digest": "sha384", 00:15:00.623 "dhgroup": "ffdhe2048" 00:15:00.623 } 00:15:00.623 } 00:15:00.623 ]' 00:15:00.623 22:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:00.623 22:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha384 == \s\h\a\3\8\4 ]] 00:15:00.623 22:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:00.623 22:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:00.623 22:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:00.623 22:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.623 22:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.623 22:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.187 22:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjljNzA1NmQxMGZmNzhlZDdmNGJkYjgwYWQ3ODNiYTlmZjEwZDBlODUwZjlmZTVhup/mQw==: --dhchap-ctrl-secret DHHC-1:03:ZjJjNjlhODY3NWU2Nzk4NjUyN2VkODU2OWU2ODg4NDFhM2E4NmYxY2ZmNDNiZjE1MTM4YjRjOTY4YzdkZGMwNTOyvQM=: 00:15:02.119 22:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.119 22:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:02.119 22:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:02.119 22:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.119 22:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:02.119 22:38:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:02.119 22:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:02.119 22:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:02.376 22:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:15:02.376 22:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:02.376 22:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:02.376 22:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:02.376 22:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:02.376 22:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.376 22:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.376 22:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:02.376 22:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.376 22:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:02.376 22:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.376 22:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.633 00:15:02.633 22:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:02.633 22:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:02.633 22:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.890 22:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.890 22:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.890 22:38:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:02.890 22:38:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.890 22:38:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:02.890 22:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:02.890 { 00:15:02.890 "cntlid": 59, 00:15:02.890 "qid": 0, 00:15:02.890 "state": "enabled", 00:15:02.890 "thread": "nvmf_tgt_poll_group_000", 00:15:02.890 "listen_address": { 00:15:02.890 "trtype": "TCP", 00:15:02.890 "adrfam": "IPv4", 00:15:02.890 "traddr": "10.0.0.2", 00:15:02.890 "trsvcid": "4420" 00:15:02.890 }, 00:15:02.890 "peer_address": { 00:15:02.890 "trtype": "TCP", 00:15:02.890 "adrfam": "IPv4", 00:15:02.890 "traddr": "10.0.0.1", 00:15:02.890 "trsvcid": "58142" 00:15:02.890 }, 00:15:02.890 "auth": { 00:15:02.890 "state": "completed", 00:15:02.890 "digest": "sha384", 00:15:02.890 "dhgroup": "ffdhe2048" 00:15:02.890 } 00:15:02.890 } 00:15:02.890 ]' 00:15:02.890 
22:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:02.890 22:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:02.890 22:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:02.890 22:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:02.890 22:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:02.890 22:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.890 22:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.890 22:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.148 22:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmU1MDNmYWNmMmNkNzAzNTBjZGI4ZWY1ODhkZTMxMjaqvICO: --dhchap-ctrl-secret DHHC-1:02:ZWZjMTY5ZTAzMjhhMTg1ZWQwMGVmNmY5YWQxZWUwNjllNGIyZmEwZDliYzQ5NmYz8d+Jsw==: 00:15:04.519 22:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.519 22:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:04.519 22:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:04.519 22:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.519 22:38:47 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:04.519 22:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:04.519 22:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:04.519 22:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:04.519 22:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:15:04.519 22:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:04.519 22:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:04.519 22:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:04.519 22:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:04.519 22:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.519 22:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.519 22:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:04.519 22:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.519 22:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:04.519 22:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:15:04.519 22:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.777 00:15:04.777 22:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:04.777 22:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.777 22:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:05.034 22:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.034 22:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.034 22:38:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:05.034 22:38:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.034 22:38:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:05.034 22:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:05.034 { 00:15:05.034 "cntlid": 61, 00:15:05.034 "qid": 0, 00:15:05.034 "state": "enabled", 00:15:05.034 "thread": "nvmf_tgt_poll_group_000", 00:15:05.034 "listen_address": { 00:15:05.034 "trtype": "TCP", 00:15:05.034 "adrfam": "IPv4", 00:15:05.034 "traddr": "10.0.0.2", 00:15:05.034 "trsvcid": "4420" 00:15:05.034 }, 00:15:05.034 "peer_address": { 00:15:05.034 "trtype": "TCP", 00:15:05.034 "adrfam": "IPv4", 00:15:05.034 "traddr": "10.0.0.1", 00:15:05.034 "trsvcid": "58182" 00:15:05.034 }, 00:15:05.034 "auth": { 00:15:05.034 "state": "completed", 00:15:05.034 "digest": 
"sha384", 00:15:05.034 "dhgroup": "ffdhe2048" 00:15:05.034 } 00:15:05.034 } 00:15:05.034 ]' 00:15:05.034 22:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:05.034 22:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:05.034 22:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:05.291 22:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:05.291 22:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:05.291 22:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.291 22:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.291 22:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.549 22:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODk4YzJmZDMxMjQ1MTljNzAwZWM5Njc1MGEyZjhkYjM5YjJhOTZiNDVkZWI4OWZkZBtlLA==: --dhchap-ctrl-secret DHHC-1:01:MzJkYzFjNGMwOTRhNjRjZWRlOTFiYmUwMTY3ZTIzOTm4K2J/: 00:15:06.482 22:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.482 22:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:06.482 22:38:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:06.482 22:38:49 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.482 22:38:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:06.482 22:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:06.482 22:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:06.482 22:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:06.740 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:15:06.740 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:06.740 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:06.740 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:06.740 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:06.740 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.740 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:06.740 22:38:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:06.740 22:38:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.740 22:38:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:06.741 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:06.741 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:06.998 00:15:06.998 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:06.998 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:06.999 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.256 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.256 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.256 22:38:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:07.256 22:38:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.256 22:38:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:07.256 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:07.256 { 00:15:07.256 "cntlid": 63, 00:15:07.256 "qid": 0, 00:15:07.256 "state": "enabled", 00:15:07.256 "thread": "nvmf_tgt_poll_group_000", 00:15:07.256 "listen_address": { 00:15:07.256 "trtype": "TCP", 00:15:07.256 "adrfam": "IPv4", 00:15:07.256 "traddr": "10.0.0.2", 00:15:07.256 "trsvcid": "4420" 00:15:07.256 }, 00:15:07.256 "peer_address": { 00:15:07.256 "trtype": "TCP", 00:15:07.256 "adrfam": "IPv4", 00:15:07.256 "traddr": "10.0.0.1", 00:15:07.256 "trsvcid": "58210" 00:15:07.256 }, 00:15:07.256 "auth": 
{ 00:15:07.256 "state": "completed", 00:15:07.256 "digest": "sha384", 00:15:07.256 "dhgroup": "ffdhe2048" 00:15:07.256 } 00:15:07.256 } 00:15:07.256 ]' 00:15:07.256 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:07.514 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:07.514 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:07.514 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:07.514 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:07.514 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.514 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.514 22:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.772 22:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODBjNWYwOTA0NGQ2M2JlNzQ4OWYyNGE3NTdmMzFjNmJhNjNiZDhiM2VhYjA3ZjBjYzFmNzJhNWUyYWZmYzE2MfSBhX4=: 00:15:08.713 22:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.713 22:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:08.713 22:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:08.713 22:38:52 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.713 22:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:08.713 22:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:08.713 22:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:08.713 22:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:08.713 22:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:08.970 22:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:15:08.970 22:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:08.970 22:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:08.970 22:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:08.970 22:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:08.970 22:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.970 22:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.970 22:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:08.970 22:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.970 22:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:08.970 22:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.970 22:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.536 00:15:09.536 22:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:09.536 22:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:09.536 22:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.537 22:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.537 22:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.537 22:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:09.537 22:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.794 22:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:09.794 22:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:09.794 { 00:15:09.794 "cntlid": 65, 00:15:09.794 "qid": 0, 00:15:09.794 "state": "enabled", 00:15:09.794 "thread": "nvmf_tgt_poll_group_000", 00:15:09.794 "listen_address": { 00:15:09.794 "trtype": "TCP", 00:15:09.794 "adrfam": "IPv4", 00:15:09.794 "traddr": "10.0.0.2", 00:15:09.794 "trsvcid": "4420" 00:15:09.794 }, 00:15:09.794 "peer_address": { 00:15:09.794 "trtype": "TCP", 
00:15:09.794 "adrfam": "IPv4", 00:15:09.794 "traddr": "10.0.0.1", 00:15:09.794 "trsvcid": "49118" 00:15:09.794 }, 00:15:09.794 "auth": { 00:15:09.794 "state": "completed", 00:15:09.794 "digest": "sha384", 00:15:09.794 "dhgroup": "ffdhe3072" 00:15:09.794 } 00:15:09.794 } 00:15:09.794 ]' 00:15:09.794 22:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:09.794 22:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:09.794 22:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:09.794 22:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:09.794 22:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:09.794 22:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.794 22:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.794 22:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.051 22:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjljNzA1NmQxMGZmNzhlZDdmNGJkYjgwYWQ3ODNiYTlmZjEwZDBlODUwZjlmZTVhup/mQw==: --dhchap-ctrl-secret DHHC-1:03:ZjJjNjlhODY3NWU2Nzk4NjUyN2VkODU2OWU2ODg4NDFhM2E4NmYxY2ZmNDNiZjE1MTM4YjRjOTY4YzdkZGMwNTOyvQM=: 00:15:10.982 22:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.982 22:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:10.982 22:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:10.982 22:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.982 22:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:10.982 22:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:10.982 22:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:10.982 22:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:11.546 22:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:15:11.546 22:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:11.546 22:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:11.546 22:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:11.546 22:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:11.546 22:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.546 22:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.546 22:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:11.546 22:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.546 22:38:54 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:11.546 22:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.546 22:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.803 00:15:11.803 22:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:11.803 22:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:11.803 22:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.061 22:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.061 22:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.061 22:38:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:12.061 22:38:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.061 22:38:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:12.061 22:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:12.061 { 00:15:12.061 "cntlid": 67, 00:15:12.061 "qid": 0, 00:15:12.061 "state": "enabled", 00:15:12.061 "thread": "nvmf_tgt_poll_group_000", 00:15:12.061 "listen_address": { 00:15:12.061 "trtype": "TCP", 00:15:12.061 "adrfam": 
"IPv4", 00:15:12.061 "traddr": "10.0.0.2", 00:15:12.061 "trsvcid": "4420" 00:15:12.061 }, 00:15:12.061 "peer_address": { 00:15:12.061 "trtype": "TCP", 00:15:12.061 "adrfam": "IPv4", 00:15:12.061 "traddr": "10.0.0.1", 00:15:12.061 "trsvcid": "49140" 00:15:12.061 }, 00:15:12.061 "auth": { 00:15:12.061 "state": "completed", 00:15:12.061 "digest": "sha384", 00:15:12.061 "dhgroup": "ffdhe3072" 00:15:12.061 } 00:15:12.061 } 00:15:12.061 ]' 00:15:12.061 22:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:12.061 22:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:12.061 22:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:12.061 22:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:12.061 22:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:12.061 22:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.061 22:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.061 22:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.317 22:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmU1MDNmYWNmMmNkNzAzNTBjZGI4ZWY1ODhkZTMxMjaqvICO: --dhchap-ctrl-secret DHHC-1:02:ZWZjMTY5ZTAzMjhhMTg1ZWQwMGVmNmY5YWQxZWUwNjllNGIyZmEwZDliYzQ5NmYz8d+Jsw==: 00:15:13.252 22:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:15:13.511 22:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:13.511 22:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:13.511 22:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.511 22:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:13.512 22:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:13.512 22:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:13.512 22:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:13.770 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:15:13.770 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:13.770 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:13.770 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:13.770 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:13.770 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.770 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.770 22:38:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:13.770 22:38:57 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.771 22:38:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:13.771 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.771 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.029 00:15:14.029 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:14.029 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:14.029 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.287 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.287 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.287 22:38:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:14.287 22:38:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.287 22:38:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:14.287 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:14.287 { 00:15:14.287 "cntlid": 69, 00:15:14.287 "qid": 0, 00:15:14.287 "state": "enabled", 00:15:14.287 "thread": 
"nvmf_tgt_poll_group_000", 00:15:14.287 "listen_address": { 00:15:14.287 "trtype": "TCP", 00:15:14.287 "adrfam": "IPv4", 00:15:14.287 "traddr": "10.0.0.2", 00:15:14.287 "trsvcid": "4420" 00:15:14.287 }, 00:15:14.287 "peer_address": { 00:15:14.287 "trtype": "TCP", 00:15:14.287 "adrfam": "IPv4", 00:15:14.287 "traddr": "10.0.0.1", 00:15:14.287 "trsvcid": "49170" 00:15:14.287 }, 00:15:14.287 "auth": { 00:15:14.287 "state": "completed", 00:15:14.287 "digest": "sha384", 00:15:14.287 "dhgroup": "ffdhe3072" 00:15:14.287 } 00:15:14.287 } 00:15:14.287 ]' 00:15:14.287 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:14.287 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:14.287 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:14.287 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:14.287 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:14.572 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.572 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.572 22:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.831 22:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODk4YzJmZDMxMjQ1MTljNzAwZWM5Njc1MGEyZjhkYjM5YjJhOTZiNDVkZWI4OWZkZBtlLA==: --dhchap-ctrl-secret DHHC-1:01:MzJkYzFjNGMwOTRhNjRjZWRlOTFiYmUwMTY3ZTIzOTm4K2J/: 00:15:15.766 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.766 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:15.766 22:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:15.766 22:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.766 22:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:15.766 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:15.766 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:15.766 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:16.024 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:15:16.024 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:16.024 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:16.024 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:16.024 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:16.024 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.024 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:16.024 22:38:59 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:15:16.024 22:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.024 22:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:16.024 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:16.024 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:16.281 00:15:16.281 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:16.281 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:16.281 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.540 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.540 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.540 22:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:16.540 22:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.540 22:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:16.540 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:16.540 { 00:15:16.540 "cntlid": 71, 00:15:16.540 "qid": 0, 00:15:16.540 "state": "enabled", 00:15:16.540 "thread": 
"nvmf_tgt_poll_group_000", 00:15:16.540 "listen_address": { 00:15:16.540 "trtype": "TCP", 00:15:16.540 "adrfam": "IPv4", 00:15:16.540 "traddr": "10.0.0.2", 00:15:16.540 "trsvcid": "4420" 00:15:16.540 }, 00:15:16.540 "peer_address": { 00:15:16.540 "trtype": "TCP", 00:15:16.540 "adrfam": "IPv4", 00:15:16.540 "traddr": "10.0.0.1", 00:15:16.540 "trsvcid": "49194" 00:15:16.540 }, 00:15:16.540 "auth": { 00:15:16.540 "state": "completed", 00:15:16.540 "digest": "sha384", 00:15:16.540 "dhgroup": "ffdhe3072" 00:15:16.540 } 00:15:16.540 } 00:15:16.540 ]' 00:15:16.540 22:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:16.540 22:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:16.540 22:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:16.798 22:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:16.798 22:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:16.798 22:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.798 22:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.798 22:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.057 22:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODBjNWYwOTA0NGQ2M2JlNzQ4OWYyNGE3NTdmMzFjNmJhNjNiZDhiM2VhYjA3ZjBjYzFmNzJhNWUyYWZmYzE2MfSBhX4=: 00:15:18.002 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.002 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.002 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:18.002 22:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:18.002 22:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.002 22:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:18.002 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:18.002 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:18.002 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:18.002 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:18.260 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:15:18.260 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:18.260 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:18.260 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:18.260 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:18.260 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.260 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:15:18.260 22:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:18.260 22:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.260 22:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:18.260 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.260 22:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.829 00:15:18.829 22:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:18.829 22:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.829 22:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:18.829 22:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.829 22:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.829 22:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:18.829 22:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.087 22:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:19.087 22:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:15:19.087 { 00:15:19.087 "cntlid": 73, 00:15:19.087 "qid": 0, 00:15:19.087 "state": "enabled", 00:15:19.087 "thread": "nvmf_tgt_poll_group_000", 00:15:19.087 "listen_address": { 00:15:19.087 "trtype": "TCP", 00:15:19.087 "adrfam": "IPv4", 00:15:19.087 "traddr": "10.0.0.2", 00:15:19.087 "trsvcid": "4420" 00:15:19.087 }, 00:15:19.087 "peer_address": { 00:15:19.087 "trtype": "TCP", 00:15:19.087 "adrfam": "IPv4", 00:15:19.087 "traddr": "10.0.0.1", 00:15:19.087 "trsvcid": "45816" 00:15:19.087 }, 00:15:19.087 "auth": { 00:15:19.087 "state": "completed", 00:15:19.087 "digest": "sha384", 00:15:19.087 "dhgroup": "ffdhe4096" 00:15:19.087 } 00:15:19.087 } 00:15:19.087 ]' 00:15:19.087 22:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:19.087 22:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:19.087 22:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:19.087 22:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:19.087 22:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:19.087 22:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.087 22:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.087 22:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.345 22:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjljNzA1NmQxMGZmNzhlZDdmNGJkYjgwYWQ3ODNiYTlmZjEwZDBlODUwZjlmZTVhup/mQw==: --dhchap-ctrl-secret 
DHHC-1:03:ZjJjNjlhODY3NWU2Nzk4NjUyN2VkODU2OWU2ODg4NDFhM2E4NmYxY2ZmNDNiZjE1MTM4YjRjOTY4YzdkZGMwNTOyvQM=: 00:15:20.280 22:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.280 22:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:20.280 22:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:20.280 22:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.280 22:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:20.280 22:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:20.280 22:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:20.280 22:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:20.537 22:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:15:20.537 22:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:20.538 22:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:20.538 22:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:20.538 22:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:20.538 22:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.538 22:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.538 22:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:20.538 22:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.538 22:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:20.538 22:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.538 22:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.105 00:15:21.105 22:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:21.105 22:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:21.105 22:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.364 22:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.364 22:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.364 22:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:21.364 22:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.364 22:39:04 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:21.364 22:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:21.364 { 00:15:21.364 "cntlid": 75, 00:15:21.364 "qid": 0, 00:15:21.364 "state": "enabled", 00:15:21.364 "thread": "nvmf_tgt_poll_group_000", 00:15:21.364 "listen_address": { 00:15:21.364 "trtype": "TCP", 00:15:21.364 "adrfam": "IPv4", 00:15:21.364 "traddr": "10.0.0.2", 00:15:21.364 "trsvcid": "4420" 00:15:21.364 }, 00:15:21.364 "peer_address": { 00:15:21.364 "trtype": "TCP", 00:15:21.364 "adrfam": "IPv4", 00:15:21.364 "traddr": "10.0.0.1", 00:15:21.364 "trsvcid": "45844" 00:15:21.364 }, 00:15:21.364 "auth": { 00:15:21.364 "state": "completed", 00:15:21.364 "digest": "sha384", 00:15:21.364 "dhgroup": "ffdhe4096" 00:15:21.364 } 00:15:21.364 } 00:15:21.364 ]' 00:15:21.364 22:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:21.364 22:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:21.364 22:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:21.364 22:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:21.364 22:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:21.364 22:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.364 22:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.364 22:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.624 22:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmU1MDNmYWNmMmNkNzAzNTBjZGI4ZWY1ODhkZTMxMjaqvICO: --dhchap-ctrl-secret DHHC-1:02:ZWZjMTY5ZTAzMjhhMTg1ZWQwMGVmNmY5YWQxZWUwNjllNGIyZmEwZDliYzQ5NmYz8d+Jsw==: 00:15:22.557 22:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.557 22:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:22.557 22:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:22.557 22:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.557 22:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:22.557 22:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:22.557 22:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:22.557 22:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:22.815 22:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:15:22.815 22:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:22.815 22:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:22.815 22:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:22.815 22:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:22.815 22:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:15:22.815 22:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.815 22:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:22.815 22:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.815 22:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:22.815 22:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.815 22:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.382 00:15:23.382 22:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:23.382 22:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:23.382 22:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.640 22:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.640 22:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.640 22:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:23.640 22:39:06 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.640 22:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:23.640 22:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:23.640 { 00:15:23.640 "cntlid": 77, 00:15:23.640 "qid": 0, 00:15:23.640 "state": "enabled", 00:15:23.640 "thread": "nvmf_tgt_poll_group_000", 00:15:23.640 "listen_address": { 00:15:23.640 "trtype": "TCP", 00:15:23.640 "adrfam": "IPv4", 00:15:23.640 "traddr": "10.0.0.2", 00:15:23.640 "trsvcid": "4420" 00:15:23.640 }, 00:15:23.640 "peer_address": { 00:15:23.640 "trtype": "TCP", 00:15:23.640 "adrfam": "IPv4", 00:15:23.640 "traddr": "10.0.0.1", 00:15:23.640 "trsvcid": "45866" 00:15:23.640 }, 00:15:23.640 "auth": { 00:15:23.640 "state": "completed", 00:15:23.640 "digest": "sha384", 00:15:23.640 "dhgroup": "ffdhe4096" 00:15:23.640 } 00:15:23.640 } 00:15:23.640 ]' 00:15:23.640 22:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:23.640 22:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:23.640 22:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:23.640 22:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:23.640 22:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:23.640 22:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.640 22:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.640 22:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.898 22:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODk4YzJmZDMxMjQ1MTljNzAwZWM5Njc1MGEyZjhkYjM5YjJhOTZiNDVkZWI4OWZkZBtlLA==: --dhchap-ctrl-secret DHHC-1:01:MzJkYzFjNGMwOTRhNjRjZWRlOTFiYmUwMTY3ZTIzOTm4K2J/: 00:15:24.834 22:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.834 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.834 22:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:24.834 22:39:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:24.834 22:39:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.834 22:39:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:24.834 22:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:24.835 22:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:24.835 22:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:25.400 22:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:15:25.400 22:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:25.400 22:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:25.400 22:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:25.400 22:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:25.400 22:39:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.400 22:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:25.400 22:39:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:25.400 22:39:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.400 22:39:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:25.400 22:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:25.400 22:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:25.657 00:15:25.657 22:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:25.657 22:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.657 22:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:25.913 22:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.913 22:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.913 22:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:25.913 22:39:09 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.913 22:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:25.913 22:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:25.913 { 00:15:25.913 "cntlid": 79, 00:15:25.913 "qid": 0, 00:15:25.913 "state": "enabled", 00:15:25.913 "thread": "nvmf_tgt_poll_group_000", 00:15:25.913 "listen_address": { 00:15:25.913 "trtype": "TCP", 00:15:25.913 "adrfam": "IPv4", 00:15:25.913 "traddr": "10.0.0.2", 00:15:25.913 "trsvcid": "4420" 00:15:25.913 }, 00:15:25.913 "peer_address": { 00:15:25.913 "trtype": "TCP", 00:15:25.913 "adrfam": "IPv4", 00:15:25.913 "traddr": "10.0.0.1", 00:15:25.913 "trsvcid": "45902" 00:15:25.913 }, 00:15:25.913 "auth": { 00:15:25.913 "state": "completed", 00:15:25.913 "digest": "sha384", 00:15:25.913 "dhgroup": "ffdhe4096" 00:15:25.913 } 00:15:25.913 } 00:15:25.913 ]' 00:15:25.913 22:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:25.913 22:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:25.913 22:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:25.913 22:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:25.913 22:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:25.913 22:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.913 22:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.913 22:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.171 22:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODBjNWYwOTA0NGQ2M2JlNzQ4OWYyNGE3NTdmMzFjNmJhNjNiZDhiM2VhYjA3ZjBjYzFmNzJhNWUyYWZmYzE2MfSBhX4=: 00:15:27.122 22:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.122 22:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:27.122 22:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:27.122 22:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.122 22:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:27.122 22:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:27.122 22:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:27.122 22:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:27.122 22:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:27.378 22:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:15:27.378 22:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:27.379 22:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:27.379 22:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:27.379 22:39:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:15:27.379 22:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.379 22:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.379 22:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:27.379 22:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.379 22:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:27.379 22:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.379 22:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.964 00:15:27.964 22:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:27.964 22:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:27.964 22:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.221 22:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.221 22:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:15:28.221 22:39:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:28.221 22:39:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.221 22:39:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:28.221 22:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:28.221 { 00:15:28.221 "cntlid": 81, 00:15:28.221 "qid": 0, 00:15:28.221 "state": "enabled", 00:15:28.221 "thread": "nvmf_tgt_poll_group_000", 00:15:28.221 "listen_address": { 00:15:28.221 "trtype": "TCP", 00:15:28.221 "adrfam": "IPv4", 00:15:28.221 "traddr": "10.0.0.2", 00:15:28.221 "trsvcid": "4420" 00:15:28.221 }, 00:15:28.221 "peer_address": { 00:15:28.221 "trtype": "TCP", 00:15:28.221 "adrfam": "IPv4", 00:15:28.221 "traddr": "10.0.0.1", 00:15:28.221 "trsvcid": "55878" 00:15:28.221 }, 00:15:28.221 "auth": { 00:15:28.221 "state": "completed", 00:15:28.221 "digest": "sha384", 00:15:28.221 "dhgroup": "ffdhe6144" 00:15:28.221 } 00:15:28.221 } 00:15:28.221 ]' 00:15:28.221 22:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:28.221 22:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:28.221 22:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:28.478 22:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:28.478 22:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:28.478 22:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.478 22:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.478 22:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:15:28.737 22:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjljNzA1NmQxMGZmNzhlZDdmNGJkYjgwYWQ3ODNiYTlmZjEwZDBlODUwZjlmZTVhup/mQw==: --dhchap-ctrl-secret DHHC-1:03:ZjJjNjlhODY3NWU2Nzk4NjUyN2VkODU2OWU2ODg4NDFhM2E4NmYxY2ZmNDNiZjE1MTM4YjRjOTY4YzdkZGMwNTOyvQM=: 00:15:29.698 22:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.698 22:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:29.698 22:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:29.698 22:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.698 22:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:29.698 22:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:29.698 22:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:29.698 22:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:29.964 22:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:15:29.964 22:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:29.964 22:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:15:29.964 22:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:29.964 22:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:29.964 22:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.964 22:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.964 22:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:29.964 22:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.964 22:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:29.964 22:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.964 22:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.533 00:15:30.533 22:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:30.533 22:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:30.533 22:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.791 22:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:15:30.791 22:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.791 22:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:30.791 22:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.791 22:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:30.791 22:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:30.791 { 00:15:30.791 "cntlid": 83, 00:15:30.791 "qid": 0, 00:15:30.791 "state": "enabled", 00:15:30.791 "thread": "nvmf_tgt_poll_group_000", 00:15:30.791 "listen_address": { 00:15:30.791 "trtype": "TCP", 00:15:30.791 "adrfam": "IPv4", 00:15:30.791 "traddr": "10.0.0.2", 00:15:30.791 "trsvcid": "4420" 00:15:30.791 }, 00:15:30.791 "peer_address": { 00:15:30.791 "trtype": "TCP", 00:15:30.791 "adrfam": "IPv4", 00:15:30.791 "traddr": "10.0.0.1", 00:15:30.791 "trsvcid": "55906" 00:15:30.791 }, 00:15:30.791 "auth": { 00:15:30.791 "state": "completed", 00:15:30.791 "digest": "sha384", 00:15:30.791 "dhgroup": "ffdhe6144" 00:15:30.791 } 00:15:30.791 } 00:15:30.791 ]' 00:15:30.791 22:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:30.791 22:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:30.791 22:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:30.791 22:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:30.791 22:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:30.791 22:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.791 22:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.791 22:39:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.050 22:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmU1MDNmYWNmMmNkNzAzNTBjZGI4ZWY1ODhkZTMxMjaqvICO: --dhchap-ctrl-secret DHHC-1:02:ZWZjMTY5ZTAzMjhhMTg1ZWQwMGVmNmY5YWQxZWUwNjllNGIyZmEwZDliYzQ5NmYz8d+Jsw==: 00:15:31.984 22:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.984 22:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:31.984 22:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:31.984 22:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.984 22:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:31.984 22:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:31.984 22:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:31.984 22:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:32.242 22:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:15:32.242 22:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:15:32.242 22:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:32.242 22:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:32.242 22:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:32.242 22:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.242 22:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.242 22:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:32.242 22:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.242 22:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:32.242 22:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.242 22:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.809 00:15:32.809 22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:32.809 22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.810 22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
00:15:33.067 22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.067 22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.067 22:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:33.067 22:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.067 22:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:33.067 22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:33.067 { 00:15:33.067 "cntlid": 85, 00:15:33.067 "qid": 0, 00:15:33.067 "state": "enabled", 00:15:33.067 "thread": "nvmf_tgt_poll_group_000", 00:15:33.067 "listen_address": { 00:15:33.067 "trtype": "TCP", 00:15:33.067 "adrfam": "IPv4", 00:15:33.067 "traddr": "10.0.0.2", 00:15:33.067 "trsvcid": "4420" 00:15:33.067 }, 00:15:33.067 "peer_address": { 00:15:33.067 "trtype": "TCP", 00:15:33.067 "adrfam": "IPv4", 00:15:33.067 "traddr": "10.0.0.1", 00:15:33.067 "trsvcid": "55922" 00:15:33.067 }, 00:15:33.067 "auth": { 00:15:33.067 "state": "completed", 00:15:33.067 "digest": "sha384", 00:15:33.067 "dhgroup": "ffdhe6144" 00:15:33.067 } 00:15:33.067 } 00:15:33.067 ]' 00:15:33.067 22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:33.067 22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:33.067 22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:33.324 22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:33.324 22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:33.324 22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.324 22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:33.324 22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.583 22:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODk4YzJmZDMxMjQ1MTljNzAwZWM5Njc1MGEyZjhkYjM5YjJhOTZiNDVkZWI4OWZkZBtlLA==: --dhchap-ctrl-secret DHHC-1:01:MzJkYzFjNGMwOTRhNjRjZWRlOTFiYmUwMTY3ZTIzOTm4K2J/: 00:15:34.520 22:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.520 22:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:34.520 22:39:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:34.520 22:39:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.520 22:39:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:34.520 22:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:34.520 22:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:34.520 22:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:34.778 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:15:34.778 22:39:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:34.778 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:34.778 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:34.778 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:34.778 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.778 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:34.778 22:39:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:34.778 22:39:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.778 22:39:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:34.778 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:34.778 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:35.345 00:15:35.345 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:35.345 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:35.345 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:15:35.603 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.603 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.603 22:39:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:35.603 22:39:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.603 22:39:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:35.603 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:35.603 { 00:15:35.603 "cntlid": 87, 00:15:35.603 "qid": 0, 00:15:35.603 "state": "enabled", 00:15:35.603 "thread": "nvmf_tgt_poll_group_000", 00:15:35.603 "listen_address": { 00:15:35.603 "trtype": "TCP", 00:15:35.603 "adrfam": "IPv4", 00:15:35.603 "traddr": "10.0.0.2", 00:15:35.603 "trsvcid": "4420" 00:15:35.603 }, 00:15:35.603 "peer_address": { 00:15:35.603 "trtype": "TCP", 00:15:35.603 "adrfam": "IPv4", 00:15:35.603 "traddr": "10.0.0.1", 00:15:35.603 "trsvcid": "55956" 00:15:35.603 }, 00:15:35.603 "auth": { 00:15:35.603 "state": "completed", 00:15:35.603 "digest": "sha384", 00:15:35.603 "dhgroup": "ffdhe6144" 00:15:35.603 } 00:15:35.603 } 00:15:35.603 ]' 00:15:35.603 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:35.603 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:35.603 22:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:35.603 22:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:35.603 22:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:35.603 22:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.603 22:39:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.603 22:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.860 22:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODBjNWYwOTA0NGQ2M2JlNzQ4OWYyNGE3NTdmMzFjNmJhNjNiZDhiM2VhYjA3ZjBjYzFmNzJhNWUyYWZmYzE2MfSBhX4=: 00:15:36.794 22:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.794 22:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:36.794 22:39:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:36.794 22:39:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.794 22:39:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:36.794 22:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:36.794 22:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:36.794 22:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:36.794 22:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:37.052 22:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- 
# connect_authenticate sha384 ffdhe8192 0 00:15:37.052 22:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:37.052 22:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:37.052 22:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:37.052 22:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:37.052 22:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.052 22:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.052 22:39:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:37.052 22:39:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.052 22:39:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:37.052 22:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.052 22:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.984 00:15:37.984 22:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:37.984 22:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:37.984 22:39:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.243 22:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.243 22:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.243 22:39:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:38.243 22:39:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.243 22:39:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:38.243 22:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:38.243 { 00:15:38.243 "cntlid": 89, 00:15:38.243 "qid": 0, 00:15:38.243 "state": "enabled", 00:15:38.243 "thread": "nvmf_tgt_poll_group_000", 00:15:38.243 "listen_address": { 00:15:38.243 "trtype": "TCP", 00:15:38.243 "adrfam": "IPv4", 00:15:38.243 "traddr": "10.0.0.2", 00:15:38.243 "trsvcid": "4420" 00:15:38.243 }, 00:15:38.243 "peer_address": { 00:15:38.243 "trtype": "TCP", 00:15:38.243 "adrfam": "IPv4", 00:15:38.243 "traddr": "10.0.0.1", 00:15:38.243 "trsvcid": "55970" 00:15:38.243 }, 00:15:38.243 "auth": { 00:15:38.243 "state": "completed", 00:15:38.243 "digest": "sha384", 00:15:38.243 "dhgroup": "ffdhe8192" 00:15:38.243 } 00:15:38.243 } 00:15:38.243 ]' 00:15:38.243 22:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:38.243 22:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:38.243 22:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:38.500 22:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:38.500 22:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:38.500 22:39:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.500 22:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.500 22:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.758 22:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjljNzA1NmQxMGZmNzhlZDdmNGJkYjgwYWQ3ODNiYTlmZjEwZDBlODUwZjlmZTVhup/mQw==: --dhchap-ctrl-secret DHHC-1:03:ZjJjNjlhODY3NWU2Nzk4NjUyN2VkODU2OWU2ODg4NDFhM2E4NmYxY2ZmNDNiZjE1MTM4YjRjOTY4YzdkZGMwNTOyvQM=: 00:15:39.689 22:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.689 22:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:39.689 22:39:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:39.689 22:39:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.689 22:39:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:39.689 22:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:39.689 22:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:39.689 22:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:39.945 22:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:15:39.945 22:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:39.945 22:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:39.945 22:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:39.945 22:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:39.945 22:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.945 22:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.945 22:39:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:39.945 22:39:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.945 22:39:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:39.945 22:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.945 22:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.879 00:15:40.879 22:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:15:40.879 22:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:40.879 22:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.135 22:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.135 22:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.135 22:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:41.135 22:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.135 22:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:41.135 22:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:41.135 { 00:15:41.135 "cntlid": 91, 00:15:41.135 "qid": 0, 00:15:41.135 "state": "enabled", 00:15:41.135 "thread": "nvmf_tgt_poll_group_000", 00:15:41.135 "listen_address": { 00:15:41.135 "trtype": "TCP", 00:15:41.135 "adrfam": "IPv4", 00:15:41.135 "traddr": "10.0.0.2", 00:15:41.135 "trsvcid": "4420" 00:15:41.135 }, 00:15:41.135 "peer_address": { 00:15:41.135 "trtype": "TCP", 00:15:41.135 "adrfam": "IPv4", 00:15:41.135 "traddr": "10.0.0.1", 00:15:41.135 "trsvcid": "33062" 00:15:41.135 }, 00:15:41.135 "auth": { 00:15:41.135 "state": "completed", 00:15:41.135 "digest": "sha384", 00:15:41.135 "dhgroup": "ffdhe8192" 00:15:41.135 } 00:15:41.135 } 00:15:41.135 ]' 00:15:41.135 22:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:41.135 22:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:41.135 22:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:41.135 22:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 
]] 00:15:41.135 22:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:41.393 22:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.393 22:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.393 22:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.652 22:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmU1MDNmYWNmMmNkNzAzNTBjZGI4ZWY1ODhkZTMxMjaqvICO: --dhchap-ctrl-secret DHHC-1:02:ZWZjMTY5ZTAzMjhhMTg1ZWQwMGVmNmY5YWQxZWUwNjllNGIyZmEwZDliYzQ5NmYz8d+Jsw==: 00:15:42.587 22:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.587 22:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:42.587 22:39:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:42.587 22:39:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.587 22:39:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:42.587 22:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:42.587 22:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:42.587 22:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:42.845 22:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:15:42.845 22:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:42.845 22:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:42.845 22:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:42.845 22:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:42.845 22:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.845 22:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.845 22:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:42.845 22:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.845 22:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:42.845 22:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.845 22:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.789 
00:15:43.789 22:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:43.789 22:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:43.789 22:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.095 22:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.095 22:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.095 22:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:44.095 22:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.095 22:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:44.095 22:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:44.095 { 00:15:44.095 "cntlid": 93, 00:15:44.095 "qid": 0, 00:15:44.095 "state": "enabled", 00:15:44.095 "thread": "nvmf_tgt_poll_group_000", 00:15:44.095 "listen_address": { 00:15:44.095 "trtype": "TCP", 00:15:44.095 "adrfam": "IPv4", 00:15:44.095 "traddr": "10.0.0.2", 00:15:44.095 "trsvcid": "4420" 00:15:44.095 }, 00:15:44.095 "peer_address": { 00:15:44.095 "trtype": "TCP", 00:15:44.095 "adrfam": "IPv4", 00:15:44.095 "traddr": "10.0.0.1", 00:15:44.095 "trsvcid": "33086" 00:15:44.095 }, 00:15:44.095 "auth": { 00:15:44.095 "state": "completed", 00:15:44.095 "digest": "sha384", 00:15:44.095 "dhgroup": "ffdhe8192" 00:15:44.095 } 00:15:44.095 } 00:15:44.095 ]' 00:15:44.095 22:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:44.095 22:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:44.095 22:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:44.095 22:39:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:44.095 22:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:44.095 22:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.095 22:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.095 22:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.352 22:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODk4YzJmZDMxMjQ1MTljNzAwZWM5Njc1MGEyZjhkYjM5YjJhOTZiNDVkZWI4OWZkZBtlLA==: --dhchap-ctrl-secret DHHC-1:01:MzJkYzFjNGMwOTRhNjRjZWRlOTFiYmUwMTY3ZTIzOTm4K2J/: 00:15:45.286 22:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.286 22:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:45.286 22:39:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:45.286 22:39:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.286 22:39:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:45.286 22:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:45.286 22:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 
00:15:45.286 22:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:45.544 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:15:45.544 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:45.544 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:45.544 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:45.544 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:45.544 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.544 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:45.544 22:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:45.544 22:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.545 22:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:45.545 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:45.545 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:46.481 
00:15:46.481 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:46.481 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:46.481 22:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.740 22:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.740 22:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.740 22:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:46.740 22:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.740 22:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:46.740 22:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:46.740 { 00:15:46.740 "cntlid": 95, 00:15:46.740 "qid": 0, 00:15:46.740 "state": "enabled", 00:15:46.740 "thread": "nvmf_tgt_poll_group_000", 00:15:46.740 "listen_address": { 00:15:46.740 "trtype": "TCP", 00:15:46.740 "adrfam": "IPv4", 00:15:46.740 "traddr": "10.0.0.2", 00:15:46.740 "trsvcid": "4420" 00:15:46.740 }, 00:15:46.740 "peer_address": { 00:15:46.740 "trtype": "TCP", 00:15:46.740 "adrfam": "IPv4", 00:15:46.740 "traddr": "10.0.0.1", 00:15:46.740 "trsvcid": "33106" 00:15:46.740 }, 00:15:46.740 "auth": { 00:15:46.740 "state": "completed", 00:15:46.740 "digest": "sha384", 00:15:46.740 "dhgroup": "ffdhe8192" 00:15:46.740 } 00:15:46.740 } 00:15:46.740 ]' 00:15:46.740 22:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:46.998 22:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:46.998 22:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:46.998 22:39:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:46.998 22:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:46.998 22:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.999 22:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.999 22:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.256 22:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODBjNWYwOTA0NGQ2M2JlNzQ4OWYyNGE3NTdmMzFjNmJhNjNiZDhiM2VhYjA3ZjBjYzFmNzJhNWUyYWZmYzE2MfSBhX4=: 00:15:48.194 22:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.194 22:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:48.194 22:39:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:48.194 22:39:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.194 22:39:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:48.194 22:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:48.194 22:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:48.194 22:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 
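The `for digest` / `for dhgroup` / `for keyid` markers above (auth.sh@91–@93) show the test walking a full matrix: for every digest/dhgroup pair, each key ID gets its own configure-connect-verify-disconnect cycle. This is a hedged sketch of that loop shape only — the digest and dhgroup values are ones visible in this log, and the real lists in auth.sh may be longer; commands are collected, not executed, since running them needs a live SPDK target.

```shell
#!/usr/bin/env bash
# Sketch (not the actual auth.sh) of the nested iteration implied by the
# @91 (digests) / @92 (dhgroups) / @93 (key ids) xtrace markers above.
digests=(sha384 sha512)                # values seen in this log segment
dhgroups=(null ffdhe2048 ffdhe8192)    # likewise; auth.sh may test more
keys=(key0 key1 key2 key3)

cmds=()
for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      # The host-side bdev_nvme options are reset before every attempt,
      # matching the repeated bdev_nvme_set_options calls in the log.
      cmds+=("rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests $digest --dhchap-dhgroups $dhgroup")
    done
  done
done
printf '%s\n' "${cmds[0]}"
```

With two digests, three dhgroups, and four keys this yields 24 host reconfigurations, which is why the same `bdev_nvme_set_options` line recurs so often in the trace.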
00:15:48.194 22:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:48.194 22:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:48.452 22:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:15:48.452 22:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:48.452 22:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:48.452 22:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:48.452 22:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:48.452 22:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.452 22:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.452 22:39:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:48.452 22:39:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.452 22:39:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:48.452 22:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.452 22:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.711 00:15:48.711 22:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:48.711 22:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:48.711 22:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.968 22:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.968 22:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.968 22:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:48.968 22:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.969 22:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:48.969 22:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:48.969 { 00:15:48.969 "cntlid": 97, 00:15:48.969 "qid": 0, 00:15:48.969 "state": "enabled", 00:15:48.969 "thread": "nvmf_tgt_poll_group_000", 00:15:48.969 "listen_address": { 00:15:48.969 "trtype": "TCP", 00:15:48.969 "adrfam": "IPv4", 00:15:48.969 "traddr": "10.0.0.2", 00:15:48.969 "trsvcid": "4420" 00:15:48.969 }, 00:15:48.969 "peer_address": { 00:15:48.969 "trtype": "TCP", 00:15:48.969 "adrfam": "IPv4", 00:15:48.969 "traddr": "10.0.0.1", 00:15:48.969 "trsvcid": "60510" 00:15:48.969 }, 00:15:48.969 "auth": { 00:15:48.969 "state": "completed", 00:15:48.969 "digest": "sha512", 00:15:48.969 "dhgroup": "null" 00:15:48.969 } 00:15:48.969 } 00:15:48.969 ]' 00:15:48.969 22:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 
00:15:48.969 22:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:48.969 22:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:49.226 22:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:49.226 22:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:49.226 22:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.226 22:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.226 22:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.484 22:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjljNzA1NmQxMGZmNzhlZDdmNGJkYjgwYWQ3ODNiYTlmZjEwZDBlODUwZjlmZTVhup/mQw==: --dhchap-ctrl-secret DHHC-1:03:ZjJjNjlhODY3NWU2Nzk4NjUyN2VkODU2OWU2ODg4NDFhM2E4NmYxY2ZmNDNiZjE1MTM4YjRjOTY4YzdkZGMwNTOyvQM=: 00:15:50.418 22:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.418 22:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:50.418 22:39:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:50.418 22:39:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.418 22:39:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 
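Each cycle in the log is the `connect_authenticate` helper (auth.sh@96): configure the host, register the host NQN on the subsystem with a DH-HMAC-CHAP key, attach a controller, check the resulting qpair, then tear down. The function below is a sketch assembled from the commands visible in this trace, not the helper's real source; it only collects the command strings, since executing them requires the live target and `/var/tmp/host.sock` from this CI run.

```shell
#!/usr/bin/env bash
# Hedged reconstruction of the connect_authenticate sequence seen in the log.
# Paths and NQNs are copied from the trace; the function body is inferred.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

connect_authenticate() {
  local digest=$1 dhgroup=$2 keyid=$3
  steps=(
    # host side: restrict negotiation to one digest/dhgroup pair
    "$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests $digest --dhchap-dhgroups $dhgroup"
    # target side: allow this host NQN with the key under test
    "$rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key$keyid --dhchap-ctrlr-key ckey$keyid"
    # host side: attach, which forces the DH-HMAC-CHAP exchange
    "$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q $hostnqn -n $subnqn --dhchap-key key$keyid --dhchap-ctrlr-key ckey$keyid"
    # target side: confirm the qpair reports auth state "completed"
    "$rpc nvmf_subsystem_get_qpairs $subnqn"
    # tear down before the next matrix entry
    "$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0"
  )
}

connect_authenticate sha512 null 2
printf '%s\n' "${steps[1]}"
```

Note the asymmetric key arguments: the target's `nvmf_subsystem_add_host` and the host's `bdev_nvme_attach_controller` must agree on both `--dhchap-key` and `--dhchap-ctrlr-key` for bidirectional authentication to complete, which is exactly what the `"state": "completed"` qpair JSON above confirms.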
00:15:50.418 22:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:50.418 22:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:50.418 22:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:50.676 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:15:50.676 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:50.677 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:50.677 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:50.677 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:50.677 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.677 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.677 22:39:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:50.677 22:39:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.677 22:39:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:50.677 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.677 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.933 00:15:50.933 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:50.933 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:50.933 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.192 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.192 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.192 22:39:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:51.192 22:39:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.192 22:39:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:51.192 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:51.192 { 00:15:51.192 "cntlid": 99, 00:15:51.192 "qid": 0, 00:15:51.192 "state": "enabled", 00:15:51.192 "thread": "nvmf_tgt_poll_group_000", 00:15:51.192 "listen_address": { 00:15:51.192 "trtype": "TCP", 00:15:51.192 "adrfam": "IPv4", 00:15:51.192 "traddr": "10.0.0.2", 00:15:51.192 "trsvcid": "4420" 00:15:51.192 }, 00:15:51.192 "peer_address": { 00:15:51.192 "trtype": "TCP", 00:15:51.192 "adrfam": "IPv4", 00:15:51.192 "traddr": "10.0.0.1", 00:15:51.192 "trsvcid": "60534" 00:15:51.192 }, 00:15:51.192 "auth": { 00:15:51.192 "state": "completed", 00:15:51.192 "digest": "sha512", 00:15:51.192 "dhgroup": "null" 00:15:51.192 } 00:15:51.192 } 00:15:51.192 ]' 00:15:51.192 
22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:51.192 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:51.192 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:51.450 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:51.450 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:51.450 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.450 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.450 22:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.708 22:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmU1MDNmYWNmMmNkNzAzNTBjZGI4ZWY1ODhkZTMxMjaqvICO: --dhchap-ctrl-secret DHHC-1:02:ZWZjMTY5ZTAzMjhhMTg1ZWQwMGVmNmY5YWQxZWUwNjllNGIyZmEwZDliYzQ5NmYz8d+Jsw==: 00:15:52.645 22:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.645 22:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:52.645 22:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:52.645 22:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.645 22:39:35 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:52.645 22:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:52.645 22:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:52.645 22:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:52.903 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:15:52.903 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:52.903 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:52.903 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:52.903 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:52.903 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.903 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.903 22:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:52.903 22:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.903 22:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:52.903 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.903 22:39:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.161 00:15:53.161 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:53.161 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:53.161 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.419 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.419 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.419 22:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:53.419 22:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.419 22:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:53.419 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:53.419 { 00:15:53.419 "cntlid": 101, 00:15:53.419 "qid": 0, 00:15:53.419 "state": "enabled", 00:15:53.419 "thread": "nvmf_tgt_poll_group_000", 00:15:53.419 "listen_address": { 00:15:53.419 "trtype": "TCP", 00:15:53.419 "adrfam": "IPv4", 00:15:53.419 "traddr": "10.0.0.2", 00:15:53.419 "trsvcid": "4420" 00:15:53.419 }, 00:15:53.419 "peer_address": { 00:15:53.419 "trtype": "TCP", 00:15:53.419 "adrfam": "IPv4", 00:15:53.419 "traddr": "10.0.0.1", 00:15:53.419 "trsvcid": "60566" 00:15:53.419 }, 00:15:53.419 "auth": { 00:15:53.419 "state": "completed", 00:15:53.419 "digest": "sha512", 00:15:53.419 "dhgroup": "null" 
00:15:53.419 } 00:15:53.419 } 00:15:53.419 ]' 00:15:53.419 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:53.419 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:53.419 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:53.419 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:53.419 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:53.419 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.419 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.419 22:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.679 22:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODk4YzJmZDMxMjQ1MTljNzAwZWM5Njc1MGEyZjhkYjM5YjJhOTZiNDVkZWI4OWZkZBtlLA==: --dhchap-ctrl-secret DHHC-1:01:MzJkYzFjNGMwOTRhNjRjZWRlOTFiYmUwMTY3ZTIzOTm4K2J/: 00:15:54.616 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.616 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:54.616 22:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:54.616 22:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:54.875 22:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:54.875 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:54.875 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:54.875 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:54.875 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:15:54.875 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:54.875 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:54.875 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:54.875 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:54.875 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.875 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:54.875 22:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:54.875 22:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.133 22:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:55.133 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:55.133 22:39:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:55.391 00:15:55.391 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:55.391 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:55.391 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.676 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.676 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.676 22:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:55.676 22:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.676 22:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:55.676 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:55.676 { 00:15:55.676 "cntlid": 103, 00:15:55.676 "qid": 0, 00:15:55.676 "state": "enabled", 00:15:55.676 "thread": "nvmf_tgt_poll_group_000", 00:15:55.676 "listen_address": { 00:15:55.676 "trtype": "TCP", 00:15:55.676 "adrfam": "IPv4", 00:15:55.676 "traddr": "10.0.0.2", 00:15:55.676 "trsvcid": "4420" 00:15:55.676 }, 00:15:55.676 "peer_address": { 00:15:55.676 "trtype": "TCP", 00:15:55.676 "adrfam": "IPv4", 00:15:55.676 "traddr": "10.0.0.1", 00:15:55.676 "trsvcid": "60598" 00:15:55.676 }, 00:15:55.676 "auth": { 00:15:55.676 "state": "completed", 00:15:55.676 "digest": "sha512", 00:15:55.676 "dhgroup": "null" 00:15:55.676 } 00:15:55.676 } 
00:15:55.676 ]' 00:15:55.676 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:55.676 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:55.676 22:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:55.676 22:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:55.676 22:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:55.676 22:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.676 22:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.676 22:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.933 22:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODBjNWYwOTA0NGQ2M2JlNzQ4OWYyNGE3NTdmMzFjNmJhNjNiZDhiM2VhYjA3ZjBjYzFmNzJhNWUyYWZmYzE2MfSBhX4=: 00:15:56.865 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.865 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:56.865 22:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:56.865 22:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.865 22:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 
0 ]] 00:15:56.866 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:56.866 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:56.866 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:56.866 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:57.122 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:15:57.122 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:57.122 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:57.122 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:57.122 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:57.122 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.122 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.122 22:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:57.122 22:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.122 22:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:57.122 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.123 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.381 00:15:57.639 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:57.639 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:57.639 22:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.895 22:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.896 22:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.896 22:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:57.896 22:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.896 22:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:57.896 22:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:57.896 { 00:15:57.896 "cntlid": 105, 00:15:57.896 "qid": 0, 00:15:57.896 "state": "enabled", 00:15:57.896 "thread": "nvmf_tgt_poll_group_000", 00:15:57.896 "listen_address": { 00:15:57.896 "trtype": "TCP", 00:15:57.896 "adrfam": "IPv4", 00:15:57.896 "traddr": "10.0.0.2", 00:15:57.896 "trsvcid": "4420" 00:15:57.896 }, 00:15:57.896 "peer_address": { 00:15:57.896 "trtype": "TCP", 00:15:57.896 "adrfam": "IPv4", 00:15:57.896 "traddr": "10.0.0.1", 00:15:57.896 "trsvcid": "60624" 00:15:57.896 }, 00:15:57.896 "auth": { 00:15:57.896 
"state": "completed", 00:15:57.896 "digest": "sha512", 00:15:57.896 "dhgroup": "ffdhe2048" 00:15:57.896 } 00:15:57.896 } 00:15:57.896 ]' 00:15:57.896 22:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:57.896 22:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:57.896 22:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:57.896 22:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:57.896 22:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:57.896 22:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.896 22:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.896 22:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.152 22:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjljNzA1NmQxMGZmNzhlZDdmNGJkYjgwYWQ3ODNiYTlmZjEwZDBlODUwZjlmZTVhup/mQw==: --dhchap-ctrl-secret DHHC-1:03:ZjJjNjlhODY3NWU2Nzk4NjUyN2VkODU2OWU2ODg4NDFhM2E4NmYxY2ZmNDNiZjE1MTM4YjRjOTY4YzdkZGMwNTOyvQM=: 00:15:59.124 22:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.124 22:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:59.124 22:39:42 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:59.124 22:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.124 22:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:59.124 22:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:59.124 22:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:59.124 22:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:59.382 22:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:15:59.382 22:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:59.382 22:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:59.382 22:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:59.382 22:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:59.382 22:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.382 22:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.382 22:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:59.382 22:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.383 22:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:59.383 22:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.383 22:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.642 00:15:59.901 22:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:59.901 22:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:59.901 22:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.901 22:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.901 22:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.901 22:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:59.901 22:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.160 22:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:00.160 22:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:00.160 { 00:16:00.160 "cntlid": 107, 00:16:00.160 "qid": 0, 00:16:00.160 "state": "enabled", 00:16:00.160 "thread": "nvmf_tgt_poll_group_000", 00:16:00.160 "listen_address": { 00:16:00.160 "trtype": "TCP", 00:16:00.160 "adrfam": "IPv4", 00:16:00.160 "traddr": "10.0.0.2", 00:16:00.160 "trsvcid": "4420" 00:16:00.160 }, 00:16:00.160 "peer_address": { 00:16:00.160 "trtype": "TCP", 
00:16:00.160 "adrfam": "IPv4", 00:16:00.160 "traddr": "10.0.0.1", 00:16:00.160 "trsvcid": "51060" 00:16:00.160 }, 00:16:00.160 "auth": { 00:16:00.160 "state": "completed", 00:16:00.160 "digest": "sha512", 00:16:00.160 "dhgroup": "ffdhe2048" 00:16:00.160 } 00:16:00.160 } 00:16:00.160 ]' 00:16:00.160 22:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:00.160 22:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:00.160 22:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:00.160 22:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:00.160 22:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:00.160 22:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.160 22:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.160 22:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.418 22:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmU1MDNmYWNmMmNkNzAzNTBjZGI4ZWY1ODhkZTMxMjaqvICO: --dhchap-ctrl-secret DHHC-1:02:ZWZjMTY5ZTAzMjhhMTg1ZWQwMGVmNmY5YWQxZWUwNjllNGIyZmEwZDliYzQ5NmYz8d+Jsw==: 00:16:01.357 22:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.358 22:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:01.358 22:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:01.358 22:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.358 22:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:01.358 22:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:01.358 22:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:01.358 22:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:01.616 22:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:16:01.616 22:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:01.616 22:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:01.616 22:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:01.616 22:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:01.616 22:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.616 22:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.616 22:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:01.616 22:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.616 22:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 
]] 00:16:01.616 22:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.616 22:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.185 00:16:02.185 22:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:02.185 22:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:02.185 22:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.443 22:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.443 22:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.443 22:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:02.443 22:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.443 22:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:02.443 22:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:02.443 { 00:16:02.443 "cntlid": 109, 00:16:02.443 "qid": 0, 00:16:02.443 "state": "enabled", 00:16:02.443 "thread": "nvmf_tgt_poll_group_000", 00:16:02.443 "listen_address": { 00:16:02.443 "trtype": "TCP", 00:16:02.443 "adrfam": "IPv4", 00:16:02.443 "traddr": "10.0.0.2", 00:16:02.443 "trsvcid": "4420" 
00:16:02.443 }, 00:16:02.443 "peer_address": { 00:16:02.443 "trtype": "TCP", 00:16:02.443 "adrfam": "IPv4", 00:16:02.443 "traddr": "10.0.0.1", 00:16:02.443 "trsvcid": "51084" 00:16:02.443 }, 00:16:02.443 "auth": { 00:16:02.443 "state": "completed", 00:16:02.443 "digest": "sha512", 00:16:02.443 "dhgroup": "ffdhe2048" 00:16:02.443 } 00:16:02.443 } 00:16:02.443 ]' 00:16:02.443 22:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:02.443 22:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:02.443 22:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:02.443 22:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:02.443 22:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:02.443 22:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.443 22:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.443 22:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.701 22:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODk4YzJmZDMxMjQ1MTljNzAwZWM5Njc1MGEyZjhkYjM5YjJhOTZiNDVkZWI4OWZkZBtlLA==: --dhchap-ctrl-secret DHHC-1:01:MzJkYzFjNGMwOTRhNjRjZWRlOTFiYmUwMTY3ZTIzOTm4K2J/: 00:16:03.634 22:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.634 22:39:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:03.634 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:03.634 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.634 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:03.634 22:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:03.634 22:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:03.634 22:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:03.892 22:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:16:03.892 22:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:03.892 22:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:03.892 22:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:03.892 22:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:03.892 22:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.892 22:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:03.892 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:03.892 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.892 22:39:47 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:03.892 22:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:03.892 22:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:04.459 00:16:04.459 22:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:04.459 22:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:04.459 22:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.717 22:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.717 22:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.717 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:04.717 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.717 22:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:04.717 22:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:04.717 { 00:16:04.717 "cntlid": 111, 00:16:04.717 "qid": 0, 00:16:04.717 "state": "enabled", 00:16:04.717 "thread": "nvmf_tgt_poll_group_000", 00:16:04.717 "listen_address": { 00:16:04.717 "trtype": "TCP", 00:16:04.717 "adrfam": "IPv4", 00:16:04.717 "traddr": "10.0.0.2", 
00:16:04.717 "trsvcid": "4420" 00:16:04.717 }, 00:16:04.717 "peer_address": { 00:16:04.717 "trtype": "TCP", 00:16:04.717 "adrfam": "IPv4", 00:16:04.717 "traddr": "10.0.0.1", 00:16:04.717 "trsvcid": "51110" 00:16:04.717 }, 00:16:04.717 "auth": { 00:16:04.717 "state": "completed", 00:16:04.717 "digest": "sha512", 00:16:04.717 "dhgroup": "ffdhe2048" 00:16:04.717 } 00:16:04.717 } 00:16:04.717 ]' 00:16:04.717 22:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:04.717 22:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:04.717 22:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:04.717 22:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:04.717 22:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:04.717 22:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.717 22:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.717 22:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.974 22:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODBjNWYwOTA0NGQ2M2JlNzQ4OWYyNGE3NTdmMzFjNmJhNjNiZDhiM2VhYjA3ZjBjYzFmNzJhNWUyYWZmYzE2MfSBhX4=: 00:16:05.907 22:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.907 22:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:05.907 22:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:05.907 22:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.907 22:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:05.907 22:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:05.907 22:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:05.907 22:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:05.907 22:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:06.166 22:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:16:06.166 22:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:06.166 22:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:06.166 22:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:06.166 22:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:06.166 22:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.166 22:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.166 22:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:06.166 22:39:49 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.166 22:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:06.166 22:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.166 22:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.734 00:16:06.734 22:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:06.734 22:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:06.734 22:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.992 22:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.992 22:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.992 22:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:06.992 22:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.992 22:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:06.992 22:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:06.992 { 00:16:06.992 "cntlid": 113, 00:16:06.992 "qid": 0, 00:16:06.992 "state": "enabled", 00:16:06.992 "thread": 
"nvmf_tgt_poll_group_000", 00:16:06.992 "listen_address": { 00:16:06.992 "trtype": "TCP", 00:16:06.992 "adrfam": "IPv4", 00:16:06.992 "traddr": "10.0.0.2", 00:16:06.992 "trsvcid": "4420" 00:16:06.992 }, 00:16:06.992 "peer_address": { 00:16:06.992 "trtype": "TCP", 00:16:06.992 "adrfam": "IPv4", 00:16:06.992 "traddr": "10.0.0.1", 00:16:06.992 "trsvcid": "51134" 00:16:06.992 }, 00:16:06.992 "auth": { 00:16:06.992 "state": "completed", 00:16:06.992 "digest": "sha512", 00:16:06.992 "dhgroup": "ffdhe3072" 00:16:06.992 } 00:16:06.992 } 00:16:06.992 ]' 00:16:06.992 22:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:06.992 22:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:06.992 22:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:06.992 22:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:06.992 22:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:06.992 22:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.992 22:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.992 22:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.250 22:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjljNzA1NmQxMGZmNzhlZDdmNGJkYjgwYWQ3ODNiYTlmZjEwZDBlODUwZjlmZTVhup/mQw==: --dhchap-ctrl-secret DHHC-1:03:ZjJjNjlhODY3NWU2Nzk4NjUyN2VkODU2OWU2ODg4NDFhM2E4NmYxY2ZmNDNiZjE1MTM4YjRjOTY4YzdkZGMwNTOyvQM=: 00:16:08.183 22:39:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.183 22:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:08.183 22:39:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:08.183 22:39:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.183 22:39:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:08.183 22:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:08.183 22:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:08.183 22:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:08.441 22:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:16:08.441 22:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:08.441 22:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:08.441 22:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:08.441 22:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:08.441 22:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.441 22:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:08.441 22:39:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:08.441 22:39:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.441 22:39:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:08.441 22:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.441 22:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.009 00:16:09.009 22:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:09.009 22:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:09.009 22:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.267 22:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.267 22:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.267 22:39:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:09.267 22:39:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.267 22:39:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:09.267 22:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:16:09.267 { 00:16:09.267 "cntlid": 115, 00:16:09.267 "qid": 0, 00:16:09.267 "state": "enabled", 00:16:09.267 "thread": "nvmf_tgt_poll_group_000", 00:16:09.267 "listen_address": { 00:16:09.267 "trtype": "TCP", 00:16:09.267 "adrfam": "IPv4", 00:16:09.267 "traddr": "10.0.0.2", 00:16:09.267 "trsvcid": "4420" 00:16:09.267 }, 00:16:09.267 "peer_address": { 00:16:09.267 "trtype": "TCP", 00:16:09.267 "adrfam": "IPv4", 00:16:09.267 "traddr": "10.0.0.1", 00:16:09.267 "trsvcid": "47554" 00:16:09.267 }, 00:16:09.267 "auth": { 00:16:09.267 "state": "completed", 00:16:09.267 "digest": "sha512", 00:16:09.267 "dhgroup": "ffdhe3072" 00:16:09.267 } 00:16:09.267 } 00:16:09.267 ]' 00:16:09.267 22:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:09.267 22:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:09.267 22:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:09.267 22:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:09.267 22:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:09.267 22:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.267 22:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.267 22:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.525 22:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmU1MDNmYWNmMmNkNzAzNTBjZGI4ZWY1ODhkZTMxMjaqvICO: --dhchap-ctrl-secret 
DHHC-1:02:ZWZjMTY5ZTAzMjhhMTg1ZWQwMGVmNmY5YWQxZWUwNjllNGIyZmEwZDliYzQ5NmYz8d+Jsw==: 00:16:10.459 22:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.459 22:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:10.459 22:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:10.459 22:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.459 22:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:10.459 22:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:10.459 22:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:10.459 22:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:10.717 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:16:10.717 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:10.717 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:10.717 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:10.717 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:10.717 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.717 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.717 22:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:10.717 22:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.717 22:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:10.717 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.717 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.282 00:16:11.282 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:11.282 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.282 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:11.282 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.282 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.282 22:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:11.282 22:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.282 22:39:54 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:11.282 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:11.282 { 00:16:11.282 "cntlid": 117, 00:16:11.282 "qid": 0, 00:16:11.282 "state": "enabled", 00:16:11.282 "thread": "nvmf_tgt_poll_group_000", 00:16:11.282 "listen_address": { 00:16:11.282 "trtype": "TCP", 00:16:11.282 "adrfam": "IPv4", 00:16:11.282 "traddr": "10.0.0.2", 00:16:11.282 "trsvcid": "4420" 00:16:11.282 }, 00:16:11.282 "peer_address": { 00:16:11.282 "trtype": "TCP", 00:16:11.282 "adrfam": "IPv4", 00:16:11.282 "traddr": "10.0.0.1", 00:16:11.282 "trsvcid": "47572" 00:16:11.282 }, 00:16:11.282 "auth": { 00:16:11.282 "state": "completed", 00:16:11.282 "digest": "sha512", 00:16:11.282 "dhgroup": "ffdhe3072" 00:16:11.282 } 00:16:11.282 } 00:16:11.282 ]' 00:16:11.282 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:11.540 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:11.540 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:11.540 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:11.540 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:11.540 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.540 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.540 22:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.798 22:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-secret DHHC-1:02:ODk4YzJmZDMxMjQ1MTljNzAwZWM5Njc1MGEyZjhkYjM5YjJhOTZiNDVkZWI4OWZkZBtlLA==: --dhchap-ctrl-secret DHHC-1:01:MzJkYzFjNGMwOTRhNjRjZWRlOTFiYmUwMTY3ZTIzOTm4K2J/: 00:16:12.732 22:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.732 22:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:12.732 22:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:12.732 22:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.732 22:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:12.732 22:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:12.732 22:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:12.732 22:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:12.989 22:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:16:12.989 22:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:12.989 22:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:12.989 22:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:12.989 22:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:12.989 22:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.989 22:39:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:12.989 22:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:12.989 22:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.989 22:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:12.989 22:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:12.989 22:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:13.246 00:16:13.246 22:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:13.246 22:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:13.246 22:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.504 22:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.504 22:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.504 22:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:13.504 22:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.504 22:39:56 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:13.504 22:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:13.504 { 00:16:13.504 "cntlid": 119, 00:16:13.504 "qid": 0, 00:16:13.504 "state": "enabled", 00:16:13.504 "thread": "nvmf_tgt_poll_group_000", 00:16:13.504 "listen_address": { 00:16:13.504 "trtype": "TCP", 00:16:13.504 "adrfam": "IPv4", 00:16:13.504 "traddr": "10.0.0.2", 00:16:13.504 "trsvcid": "4420" 00:16:13.504 }, 00:16:13.504 "peer_address": { 00:16:13.504 "trtype": "TCP", 00:16:13.504 "adrfam": "IPv4", 00:16:13.504 "traddr": "10.0.0.1", 00:16:13.504 "trsvcid": "47612" 00:16:13.504 }, 00:16:13.504 "auth": { 00:16:13.504 "state": "completed", 00:16:13.504 "digest": "sha512", 00:16:13.504 "dhgroup": "ffdhe3072" 00:16:13.504 } 00:16:13.504 } 00:16:13.504 ]' 00:16:13.504 22:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:13.504 22:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:13.504 22:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:13.767 22:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:13.768 22:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:13.768 22:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.768 22:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.768 22:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.071 22:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-secret DHHC-1:03:ODBjNWYwOTA0NGQ2M2JlNzQ4OWYyNGE3NTdmMzFjNmJhNjNiZDhiM2VhYjA3ZjBjYzFmNzJhNWUyYWZmYzE2MfSBhX4=: 00:16:15.003 22:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.003 22:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:15.003 22:39:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:15.003 22:39:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.003 22:39:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:15.003 22:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:15.003 22:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:15.003 22:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:15.003 22:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:15.003 22:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:16:15.003 22:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:15.003 22:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:15.003 22:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:15.003 22:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:15.003 22:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.003 22:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.003 22:39:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:15.003 22:39:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.003 22:39:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:15.003 22:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.003 22:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.569 00:16:15.569 22:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:15.569 22:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:15.569 22:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.827 22:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.827 22:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.827 22:39:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 
00:16:15.827 22:39:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.827 22:39:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:15.827 22:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:15.827 { 00:16:15.827 "cntlid": 121, 00:16:15.827 "qid": 0, 00:16:15.827 "state": "enabled", 00:16:15.827 "thread": "nvmf_tgt_poll_group_000", 00:16:15.827 "listen_address": { 00:16:15.827 "trtype": "TCP", 00:16:15.827 "adrfam": "IPv4", 00:16:15.827 "traddr": "10.0.0.2", 00:16:15.827 "trsvcid": "4420" 00:16:15.827 }, 00:16:15.827 "peer_address": { 00:16:15.827 "trtype": "TCP", 00:16:15.827 "adrfam": "IPv4", 00:16:15.827 "traddr": "10.0.0.1", 00:16:15.827 "trsvcid": "47644" 00:16:15.827 }, 00:16:15.827 "auth": { 00:16:15.827 "state": "completed", 00:16:15.827 "digest": "sha512", 00:16:15.827 "dhgroup": "ffdhe4096" 00:16:15.827 } 00:16:15.827 } 00:16:15.827 ]' 00:16:15.827 22:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:15.827 22:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:15.827 22:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:15.827 22:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:15.827 22:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:15.827 22:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.827 22:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.827 22:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.084 22:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjljNzA1NmQxMGZmNzhlZDdmNGJkYjgwYWQ3ODNiYTlmZjEwZDBlODUwZjlmZTVhup/mQw==: --dhchap-ctrl-secret DHHC-1:03:ZjJjNjlhODY3NWU2Nzk4NjUyN2VkODU2OWU2ODg4NDFhM2E4NmYxY2ZmNDNiZjE1MTM4YjRjOTY4YzdkZGMwNTOyvQM=: 00:16:17.453 22:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.453 22:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:17.453 22:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:17.453 22:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.453 22:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:17.453 22:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:17.453 22:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:17.453 22:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:17.453 22:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:16:17.453 22:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:17.453 22:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:17.453 22:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:17.453 22:40:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:17.453 22:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.453 22:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.453 22:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:17.453 22:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.453 22:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:17.453 22:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.453 22:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.019 00:16:18.019 22:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:18.019 22:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:18.019 22:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.277 22:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.277 22:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.277 22:40:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:18.277 22:40:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.277 22:40:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:18.277 22:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:18.277 { 00:16:18.277 "cntlid": 123, 00:16:18.277 "qid": 0, 00:16:18.277 "state": "enabled", 00:16:18.277 "thread": "nvmf_tgt_poll_group_000", 00:16:18.277 "listen_address": { 00:16:18.277 "trtype": "TCP", 00:16:18.277 "adrfam": "IPv4", 00:16:18.277 "traddr": "10.0.0.2", 00:16:18.277 "trsvcid": "4420" 00:16:18.277 }, 00:16:18.277 "peer_address": { 00:16:18.277 "trtype": "TCP", 00:16:18.277 "adrfam": "IPv4", 00:16:18.277 "traddr": "10.0.0.1", 00:16:18.277 "trsvcid": "57492" 00:16:18.277 }, 00:16:18.277 "auth": { 00:16:18.277 "state": "completed", 00:16:18.277 "digest": "sha512", 00:16:18.277 "dhgroup": "ffdhe4096" 00:16:18.277 } 00:16:18.277 } 00:16:18.277 ]' 00:16:18.277 22:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:18.277 22:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:18.277 22:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:18.277 22:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:18.277 22:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:18.277 22:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.277 22:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.277 22:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.534 22:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmU1MDNmYWNmMmNkNzAzNTBjZGI4ZWY1ODhkZTMxMjaqvICO: --dhchap-ctrl-secret DHHC-1:02:ZWZjMTY5ZTAzMjhhMTg1ZWQwMGVmNmY5YWQxZWUwNjllNGIyZmEwZDliYzQ5NmYz8d+Jsw==: 00:16:19.467 22:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.467 22:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:19.467 22:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:19.467 22:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.467 22:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:19.467 22:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:19.467 22:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:19.467 22:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:19.725 22:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:16:19.725 22:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:19.725 22:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 
00:16:19.725 22:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:19.982 22:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:19.982 22:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.982 22:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.982 22:40:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:19.982 22:40:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.982 22:40:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:19.982 22:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.982 22:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.239 00:16:20.239 22:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:20.239 22:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:20.239 22:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.497 22:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:16:20.497 22:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.497 22:40:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:20.497 22:40:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.497 22:40:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:20.497 22:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:20.497 { 00:16:20.497 "cntlid": 125, 00:16:20.497 "qid": 0, 00:16:20.497 "state": "enabled", 00:16:20.497 "thread": "nvmf_tgt_poll_group_000", 00:16:20.497 "listen_address": { 00:16:20.497 "trtype": "TCP", 00:16:20.497 "adrfam": "IPv4", 00:16:20.497 "traddr": "10.0.0.2", 00:16:20.497 "trsvcid": "4420" 00:16:20.497 }, 00:16:20.497 "peer_address": { 00:16:20.497 "trtype": "TCP", 00:16:20.497 "adrfam": "IPv4", 00:16:20.497 "traddr": "10.0.0.1", 00:16:20.497 "trsvcid": "57524" 00:16:20.497 }, 00:16:20.497 "auth": { 00:16:20.497 "state": "completed", 00:16:20.497 "digest": "sha512", 00:16:20.497 "dhgroup": "ffdhe4096" 00:16:20.497 } 00:16:20.497 } 00:16:20.497 ]' 00:16:20.497 22:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:20.755 22:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:20.755 22:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:20.755 22:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:20.755 22:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:20.755 22:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.755 22:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.755 22:40:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.013 22:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODk4YzJmZDMxMjQ1MTljNzAwZWM5Njc1MGEyZjhkYjM5YjJhOTZiNDVkZWI4OWZkZBtlLA==: --dhchap-ctrl-secret DHHC-1:01:MzJkYzFjNGMwOTRhNjRjZWRlOTFiYmUwMTY3ZTIzOTm4K2J/: 00:16:21.947 22:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.947 22:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:21.947 22:40:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:21.947 22:40:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.947 22:40:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:21.947 22:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:21.947 22:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:21.947 22:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:22.206 22:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:16:22.206 22:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:16:22.206 22:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:22.206 22:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:22.206 22:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:22.206 22:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.206 22:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:22.206 22:40:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:22.206 22:40:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.206 22:40:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:22.206 22:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:22.206 22:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:22.463 00:16:22.463 22:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:22.463 22:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.463 22:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:22.721 22:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:16:22.721 22:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.721 22:40:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:22.721 22:40:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.721 22:40:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:22.721 22:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:22.721 { 00:16:22.721 "cntlid": 127, 00:16:22.721 "qid": 0, 00:16:22.721 "state": "enabled", 00:16:22.721 "thread": "nvmf_tgt_poll_group_000", 00:16:22.721 "listen_address": { 00:16:22.721 "trtype": "TCP", 00:16:22.721 "adrfam": "IPv4", 00:16:22.721 "traddr": "10.0.0.2", 00:16:22.721 "trsvcid": "4420" 00:16:22.721 }, 00:16:22.721 "peer_address": { 00:16:22.721 "trtype": "TCP", 00:16:22.721 "adrfam": "IPv4", 00:16:22.721 "traddr": "10.0.0.1", 00:16:22.721 "trsvcid": "57554" 00:16:22.721 }, 00:16:22.721 "auth": { 00:16:22.721 "state": "completed", 00:16:22.721 "digest": "sha512", 00:16:22.721 "dhgroup": "ffdhe4096" 00:16:22.721 } 00:16:22.721 } 00:16:22.721 ]' 00:16:22.721 22:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:22.979 22:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:22.979 22:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:22.979 22:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:22.979 22:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:22.979 22:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.979 22:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.979 22:40:06 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.237 22:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODBjNWYwOTA0NGQ2M2JlNzQ4OWYyNGE3NTdmMzFjNmJhNjNiZDhiM2VhYjA3ZjBjYzFmNzJhNWUyYWZmYzE2MfSBhX4=: 00:16:24.172 22:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.172 22:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:24.172 22:40:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:24.172 22:40:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.172 22:40:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:24.172 22:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:24.172 22:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:24.172 22:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:24.172 22:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:24.430 22:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:16:24.430 22:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:16:24.430 22:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:24.430 22:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:24.430 22:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:24.430 22:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.430 22:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.430 22:40:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:24.430 22:40:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.430 22:40:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:24.430 22:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.430 22:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.996 00:16:24.996 22:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:24.996 22:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:24.996 22:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.255 22:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.255 22:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.255 22:40:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:25.255 22:40:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.255 22:40:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:25.255 22:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:25.255 { 00:16:25.255 "cntlid": 129, 00:16:25.255 "qid": 0, 00:16:25.255 "state": "enabled", 00:16:25.255 "thread": "nvmf_tgt_poll_group_000", 00:16:25.255 "listen_address": { 00:16:25.255 "trtype": "TCP", 00:16:25.255 "adrfam": "IPv4", 00:16:25.255 "traddr": "10.0.0.2", 00:16:25.255 "trsvcid": "4420" 00:16:25.255 }, 00:16:25.255 "peer_address": { 00:16:25.255 "trtype": "TCP", 00:16:25.255 "adrfam": "IPv4", 00:16:25.255 "traddr": "10.0.0.1", 00:16:25.255 "trsvcid": "57582" 00:16:25.255 }, 00:16:25.255 "auth": { 00:16:25.255 "state": "completed", 00:16:25.255 "digest": "sha512", 00:16:25.255 "dhgroup": "ffdhe6144" 00:16:25.255 } 00:16:25.255 } 00:16:25.255 ]' 00:16:25.255 22:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:25.255 22:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:25.255 22:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:25.255 22:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:25.255 22:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:25.255 22:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.255 22:40:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.255 22:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.513 22:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjljNzA1NmQxMGZmNzhlZDdmNGJkYjgwYWQ3ODNiYTlmZjEwZDBlODUwZjlmZTVhup/mQw==: --dhchap-ctrl-secret DHHC-1:03:ZjJjNjlhODY3NWU2Nzk4NjUyN2VkODU2OWU2ODg4NDFhM2E4NmYxY2ZmNDNiZjE1MTM4YjRjOTY4YzdkZGMwNTOyvQM=: 00:16:26.445 22:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.703 22:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:26.703 22:40:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:26.703 22:40:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.703 22:40:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:26.703 22:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:26.703 22:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:26.703 22:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:26.961 22:40:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:16:26.961 22:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:26.961 22:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:26.961 22:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:26.961 22:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:26.961 22:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.961 22:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.961 22:40:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:26.961 22:40:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.961 22:40:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:26.961 22:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.961 22:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.527 00:16:27.527 22:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:27.527 22:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:16:27.527 22:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.785 22:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.785 22:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.785 22:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:27.785 22:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.785 22:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:27.785 22:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:27.785 { 00:16:27.785 "cntlid": 131, 00:16:27.785 "qid": 0, 00:16:27.785 "state": "enabled", 00:16:27.785 "thread": "nvmf_tgt_poll_group_000", 00:16:27.785 "listen_address": { 00:16:27.785 "trtype": "TCP", 00:16:27.785 "adrfam": "IPv4", 00:16:27.785 "traddr": "10.0.0.2", 00:16:27.785 "trsvcid": "4420" 00:16:27.785 }, 00:16:27.785 "peer_address": { 00:16:27.785 "trtype": "TCP", 00:16:27.785 "adrfam": "IPv4", 00:16:27.785 "traddr": "10.0.0.1", 00:16:27.785 "trsvcid": "57614" 00:16:27.785 }, 00:16:27.785 "auth": { 00:16:27.785 "state": "completed", 00:16:27.785 "digest": "sha512", 00:16:27.785 "dhgroup": "ffdhe6144" 00:16:27.785 } 00:16:27.785 } 00:16:27.785 ]' 00:16:27.785 22:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:27.785 22:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:27.785 22:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:27.785 22:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:27.785 22:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
00:16:27.785 22:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.785 22:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.785 22:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.043 22:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmU1MDNmYWNmMmNkNzAzNTBjZGI4ZWY1ODhkZTMxMjaqvICO: --dhchap-ctrl-secret DHHC-1:02:ZWZjMTY5ZTAzMjhhMTg1ZWQwMGVmNmY5YWQxZWUwNjllNGIyZmEwZDliYzQ5NmYz8d+Jsw==: 00:16:28.977 22:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.977 22:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:28.977 22:40:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:28.977 22:40:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.977 22:40:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:28.977 22:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:28.977 22:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:28.977 22:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:29.266 22:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:16:29.266 22:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.266 22:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:29.266 22:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:29.266 22:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:29.266 22:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.266 22:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.266 22:40:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:29.266 22:40:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.266 22:40:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:29.266 22:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.266 22:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.842 00:16:29.842 22:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:16:29.842 22:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:29.842 22:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.102 22:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.102 22:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.102 22:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:30.102 22:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.102 22:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:30.102 22:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:30.102 { 00:16:30.102 "cntlid": 133, 00:16:30.102 "qid": 0, 00:16:30.102 "state": "enabled", 00:16:30.102 "thread": "nvmf_tgt_poll_group_000", 00:16:30.102 "listen_address": { 00:16:30.102 "trtype": "TCP", 00:16:30.102 "adrfam": "IPv4", 00:16:30.102 "traddr": "10.0.0.2", 00:16:30.102 "trsvcid": "4420" 00:16:30.102 }, 00:16:30.102 "peer_address": { 00:16:30.102 "trtype": "TCP", 00:16:30.102 "adrfam": "IPv4", 00:16:30.102 "traddr": "10.0.0.1", 00:16:30.102 "trsvcid": "33414" 00:16:30.102 }, 00:16:30.102 "auth": { 00:16:30.102 "state": "completed", 00:16:30.102 "digest": "sha512", 00:16:30.102 "dhgroup": "ffdhe6144" 00:16:30.102 } 00:16:30.102 } 00:16:30.102 ]' 00:16:30.102 22:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:30.360 22:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.360 22:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:30.360 22:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:30.360 22:40:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.360 22:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.360 22:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.360 22:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.617 22:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODk4YzJmZDMxMjQ1MTljNzAwZWM5Njc1MGEyZjhkYjM5YjJhOTZiNDVkZWI4OWZkZBtlLA==: --dhchap-ctrl-secret DHHC-1:01:MzJkYzFjNGMwOTRhNjRjZWRlOTFiYmUwMTY3ZTIzOTm4K2J/: 00:16:31.551 22:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.551 22:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:31.551 22:40:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:31.551 22:40:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.551 22:40:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:31.551 22:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:31.551 22:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:31.551 22:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:31.809 22:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:16:31.809 22:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:31.809 22:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:31.809 22:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:31.809 22:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:31.809 22:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.809 22:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:31.809 22:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:31.809 22:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.809 22:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:31.809 22:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:31.809 22:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:32.374 00:16:32.374 22:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
hostrpc bdev_nvme_get_controllers 00:16:32.374 22:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.374 22:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:32.631 22:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.631 22:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.631 22:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:32.631 22:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.631 22:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:32.631 22:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:32.631 { 00:16:32.631 "cntlid": 135, 00:16:32.631 "qid": 0, 00:16:32.631 "state": "enabled", 00:16:32.631 "thread": "nvmf_tgt_poll_group_000", 00:16:32.631 "listen_address": { 00:16:32.631 "trtype": "TCP", 00:16:32.631 "adrfam": "IPv4", 00:16:32.631 "traddr": "10.0.0.2", 00:16:32.631 "trsvcid": "4420" 00:16:32.631 }, 00:16:32.631 "peer_address": { 00:16:32.631 "trtype": "TCP", 00:16:32.631 "adrfam": "IPv4", 00:16:32.631 "traddr": "10.0.0.1", 00:16:32.631 "trsvcid": "33460" 00:16:32.631 }, 00:16:32.631 "auth": { 00:16:32.631 "state": "completed", 00:16:32.631 "digest": "sha512", 00:16:32.631 "dhgroup": "ffdhe6144" 00:16:32.631 } 00:16:32.631 } 00:16:32.631 ]' 00:16:32.631 22:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:32.889 22:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:32.889 22:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:32.889 22:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:16:32.889 22:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:32.889 22:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.889 22:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.889 22:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.146 22:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODBjNWYwOTA0NGQ2M2JlNzQ4OWYyNGE3NTdmMzFjNmJhNjNiZDhiM2VhYjA3ZjBjYzFmNzJhNWUyYWZmYzE2MfSBhX4=: 00:16:34.078 22:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.078 22:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:34.078 22:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:34.078 22:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.078 22:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:34.078 22:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:34.078 22:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:34.078 22:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:34.078 22:40:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:34.335 22:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:16:34.335 22:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:34.335 22:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:34.335 22:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:34.335 22:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:34.335 22:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.335 22:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.335 22:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:34.335 22:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.335 22:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:34.335 22:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.335 22:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.266 00:16:35.266 22:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:35.266 22:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:35.266 22:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.524 22:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.524 22:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.524 22:40:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:35.524 22:40:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.524 22:40:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:35.524 22:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:35.524 { 00:16:35.524 "cntlid": 137, 00:16:35.524 "qid": 0, 00:16:35.524 "state": "enabled", 00:16:35.524 "thread": "nvmf_tgt_poll_group_000", 00:16:35.524 "listen_address": { 00:16:35.524 "trtype": "TCP", 00:16:35.524 "adrfam": "IPv4", 00:16:35.524 "traddr": "10.0.0.2", 00:16:35.524 "trsvcid": "4420" 00:16:35.524 }, 00:16:35.524 "peer_address": { 00:16:35.524 "trtype": "TCP", 00:16:35.524 "adrfam": "IPv4", 00:16:35.524 "traddr": "10.0.0.1", 00:16:35.524 "trsvcid": "33502" 00:16:35.524 }, 00:16:35.524 "auth": { 00:16:35.524 "state": "completed", 00:16:35.524 "digest": "sha512", 00:16:35.524 "dhgroup": "ffdhe8192" 00:16:35.524 } 00:16:35.524 } 00:16:35.524 ]' 00:16:35.524 22:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:35.524 22:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:35.524 22:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- 
# jq -r '.[0].auth.dhgroup' 00:16:35.524 22:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:35.524 22:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:35.524 22:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.524 22:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.524 22:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.781 22:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjljNzA1NmQxMGZmNzhlZDdmNGJkYjgwYWQ3ODNiYTlmZjEwZDBlODUwZjlmZTVhup/mQw==: --dhchap-ctrl-secret DHHC-1:03:ZjJjNjlhODY3NWU2Nzk4NjUyN2VkODU2OWU2ODg4NDFhM2E4NmYxY2ZmNDNiZjE1MTM4YjRjOTY4YzdkZGMwNTOyvQM=: 00:16:36.713 22:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.713 22:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:36.713 22:40:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:36.713 22:40:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.713 22:40:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:36.713 22:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:36.714 22:40:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:36.714 22:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:36.970 22:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:16:36.970 22:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:36.970 22:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:36.970 22:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:36.970 22:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:36.970 22:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.970 22:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.970 22:40:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:36.970 22:40:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.970 22:40:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:36.970 22:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.970 22:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.901 00:16:37.901 22:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:38.157 22:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:38.157 22:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.414 22:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.414 22:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.414 22:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:38.414 22:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.414 22:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:38.414 22:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:38.414 { 00:16:38.414 "cntlid": 139, 00:16:38.414 "qid": 0, 00:16:38.414 "state": "enabled", 00:16:38.414 "thread": "nvmf_tgt_poll_group_000", 00:16:38.414 "listen_address": { 00:16:38.414 "trtype": "TCP", 00:16:38.414 "adrfam": "IPv4", 00:16:38.414 "traddr": "10.0.0.2", 00:16:38.414 "trsvcid": "4420" 00:16:38.414 }, 00:16:38.414 "peer_address": { 00:16:38.414 "trtype": "TCP", 00:16:38.414 "adrfam": "IPv4", 00:16:38.414 "traddr": "10.0.0.1", 00:16:38.414 "trsvcid": "33524" 00:16:38.414 }, 00:16:38.414 "auth": { 00:16:38.414 "state": "completed", 00:16:38.414 "digest": "sha512", 00:16:38.414 "dhgroup": "ffdhe8192" 00:16:38.414 } 00:16:38.414 } 00:16:38.414 ]' 00:16:38.414 22:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:38.414 22:40:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.414 22:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.414 22:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:38.414 22:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.414 22:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.414 22:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.414 22:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.671 22:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmU1MDNmYWNmMmNkNzAzNTBjZGI4ZWY1ODhkZTMxMjaqvICO: --dhchap-ctrl-secret DHHC-1:02:ZWZjMTY5ZTAzMjhhMTg1ZWQwMGVmNmY5YWQxZWUwNjllNGIyZmEwZDliYzQ5NmYz8d+Jsw==: 00:16:39.600 22:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.600 22:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:39.600 22:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:39.600 22:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.600 22:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:39.600 22:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid 
in "${!keys[@]}" 00:16:39.600 22:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:39.600 22:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:39.856 22:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:16:39.856 22:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:39.856 22:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:39.856 22:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:39.856 22:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:39.856 22:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.856 22:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.856 22:40:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:39.856 22:40:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.856 22:40:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:39.856 22:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.856 22:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.786 00:16:40.786 22:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.786 22:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.786 22:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.043 22:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.043 22:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.043 22:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:41.043 22:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.043 22:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:41.043 22:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.043 { 00:16:41.043 "cntlid": 141, 00:16:41.043 "qid": 0, 00:16:41.043 "state": "enabled", 00:16:41.043 "thread": "nvmf_tgt_poll_group_000", 00:16:41.043 "listen_address": { 00:16:41.043 "trtype": "TCP", 00:16:41.043 "adrfam": "IPv4", 00:16:41.043 "traddr": "10.0.0.2", 00:16:41.043 "trsvcid": "4420" 00:16:41.043 }, 00:16:41.043 "peer_address": { 00:16:41.043 "trtype": "TCP", 00:16:41.043 "adrfam": "IPv4", 00:16:41.043 "traddr": "10.0.0.1", 00:16:41.043 "trsvcid": "40766" 00:16:41.043 }, 00:16:41.043 "auth": { 00:16:41.043 "state": "completed", 00:16:41.043 "digest": "sha512", 00:16:41.043 "dhgroup": "ffdhe8192" 00:16:41.043 } 00:16:41.043 } 00:16:41.043 ]' 00:16:41.043 22:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:16:41.043 22:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:41.043 22:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:41.043 22:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:41.043 22:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:41.043 22:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.043 22:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.043 22:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.300 22:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODk4YzJmZDMxMjQ1MTljNzAwZWM5Njc1MGEyZjhkYjM5YjJhOTZiNDVkZWI4OWZkZBtlLA==: --dhchap-ctrl-secret DHHC-1:01:MzJkYzFjNGMwOTRhNjRjZWRlOTFiYmUwMTY3ZTIzOTm4K2J/: 00:16:42.671 22:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.671 22:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:42.671 22:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:42.671 22:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.671 22:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:42.671 
22:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:42.671 22:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:42.671 22:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:42.671 22:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:16:42.671 22:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:42.671 22:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:42.671 22:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:42.671 22:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:42.671 22:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.671 22:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:42.671 22:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:42.671 22:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.671 22:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:42.671 22:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:42.671 22:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:43.602 00:16:43.602 22:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.602 22:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.602 22:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.859 22:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.859 22:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.859 22:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:43.859 22:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.859 22:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:43.859 22:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.859 { 00:16:43.859 "cntlid": 143, 00:16:43.859 "qid": 0, 00:16:43.859 "state": "enabled", 00:16:43.859 "thread": "nvmf_tgt_poll_group_000", 00:16:43.859 "listen_address": { 00:16:43.859 "trtype": "TCP", 00:16:43.859 "adrfam": "IPv4", 00:16:43.859 "traddr": "10.0.0.2", 00:16:43.859 "trsvcid": "4420" 00:16:43.859 }, 00:16:43.859 "peer_address": { 00:16:43.859 "trtype": "TCP", 00:16:43.859 "adrfam": "IPv4", 00:16:43.859 "traddr": "10.0.0.1", 00:16:43.859 "trsvcid": "40790" 00:16:43.859 }, 00:16:43.859 "auth": { 00:16:43.859 "state": "completed", 00:16:43.859 "digest": "sha512", 00:16:43.859 "dhgroup": "ffdhe8192" 00:16:43.859 } 00:16:43.859 } 00:16:43.859 ]' 00:16:43.859 22:40:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.859 22:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.859 22:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:43.859 22:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:43.859 22:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.859 22:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.859 22:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.859 22:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.116 22:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODBjNWYwOTA0NGQ2M2JlNzQ4OWYyNGE3NTdmMzFjNmJhNjNiZDhiM2VhYjA3ZjBjYzFmNzJhNWUyYWZmYzE2MfSBhX4=: 00:16:45.524 22:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.524 22:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:45.524 22:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:45.524 22:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.524 22:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:45.524 
22:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:45.524 22:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:16:45.524 22:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:45.524 22:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:45.524 22:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:45.524 22:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:45.524 22:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:16:45.524 22:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:45.524 22:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:45.524 22:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:45.524 22:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:45.524 22:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.524 22:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.524 22:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:45.524 22:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.524 22:40:28 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:45.524 22:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.524 22:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.457 00:16:46.457 22:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:46.457 22:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.457 22:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:46.716 22:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.716 22:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.716 22:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:46.716 22:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.716 22:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:46.716 22:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:46.716 { 00:16:46.716 "cntlid": 145, 00:16:46.716 "qid": 0, 00:16:46.716 "state": "enabled", 00:16:46.716 "thread": "nvmf_tgt_poll_group_000", 00:16:46.716 "listen_address": { 00:16:46.716 "trtype": "TCP", 00:16:46.716 "adrfam": 
"IPv4", 00:16:46.716 "traddr": "10.0.0.2", 00:16:46.716 "trsvcid": "4420" 00:16:46.716 }, 00:16:46.716 "peer_address": { 00:16:46.716 "trtype": "TCP", 00:16:46.716 "adrfam": "IPv4", 00:16:46.716 "traddr": "10.0.0.1", 00:16:46.716 "trsvcid": "40816" 00:16:46.716 }, 00:16:46.716 "auth": { 00:16:46.716 "state": "completed", 00:16:46.716 "digest": "sha512", 00:16:46.716 "dhgroup": "ffdhe8192" 00:16:46.716 } 00:16:46.716 } 00:16:46.716 ]' 00:16:46.716 22:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:46.716 22:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:46.716 22:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:46.716 22:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:46.716 22:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:46.716 22:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.716 22:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.716 22:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.974 22:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjljNzA1NmQxMGZmNzhlZDdmNGJkYjgwYWQ3ODNiYTlmZjEwZDBlODUwZjlmZTVhup/mQw==: --dhchap-ctrl-secret DHHC-1:03:ZjJjNjlhODY3NWU2Nzk4NjUyN2VkODU2OWU2ODg4NDFhM2E4NmYxY2ZmNDNiZjE1MTM4YjRjOTY4YzdkZGMwNTOyvQM=: 00:16:47.908 22:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.908 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.908 22:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:47.908 22:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:47.908 22:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.908 22:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:47.908 22:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:16:47.908 22:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:47.908 22:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.166 22:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:48.166 22:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:48.166 22:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@642 -- # local es=0 00:16:48.166 22:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@644 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:48.166 22:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@630 -- # local arg=hostrpc 00:16:48.166 22:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:16:48.166 22:40:31 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # type -t hostrpc 00:16:48.166 22:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:16:48.166 22:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:48.166 22:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:48.754 request: 00:16:48.754 { 00:16:48.754 "name": "nvme0", 00:16:48.754 "trtype": "tcp", 00:16:48.754 "traddr": "10.0.0.2", 00:16:48.754 "adrfam": "ipv4", 00:16:48.754 "trsvcid": "4420", 00:16:48.754 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:48.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:48.754 "prchk_reftag": false, 00:16:48.754 "prchk_guard": false, 00:16:48.754 "hdgst": false, 00:16:48.754 "ddgst": false, 00:16:48.754 "dhchap_key": "key2", 00:16:48.754 "method": "bdev_nvme_attach_controller", 00:16:48.754 "req_id": 1 00:16:48.754 } 00:16:48.754 Got JSON-RPC error response 00:16:48.754 response: 00:16:48.754 { 00:16:48.754 "code": -5, 00:16:48.754 "message": "Input/output error" 00:16:48.754 } 00:16:48.754 22:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # es=1 00:16:48.754 22:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:16:48.755 22:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:16:48.755 22:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@669 -- # (( !es == 0 )) 
00:16:48.755 22:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:48.755 22:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:48.755 22:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.755 22:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:48.755 22:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.755 22:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:48.755 22:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.755 22:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:48.755 22:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:48.755 22:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@642 -- # local es=0 00:16:48.755 22:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@644 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:48.755 22:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@630 -- # local arg=hostrpc 00:16:48.755 22:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:16:48.755 
22:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # type -t hostrpc 00:16:48.755 22:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:16:48.755 22:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:48.755 22:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:49.688 request: 00:16:49.688 { 00:16:49.688 "name": "nvme0", 00:16:49.688 "trtype": "tcp", 00:16:49.688 "traddr": "10.0.0.2", 00:16:49.688 "adrfam": "ipv4", 00:16:49.688 "trsvcid": "4420", 00:16:49.688 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:49.688 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:49.688 "prchk_reftag": false, 00:16:49.688 "prchk_guard": false, 00:16:49.688 "hdgst": false, 00:16:49.688 "ddgst": false, 00:16:49.688 "dhchap_key": "key1", 00:16:49.688 "dhchap_ctrlr_key": "ckey2", 00:16:49.688 "method": "bdev_nvme_attach_controller", 00:16:49.688 "req_id": 1 00:16:49.688 } 00:16:49.688 Got JSON-RPC error response 00:16:49.688 response: 00:16:49.688 { 00:16:49.688 "code": -5, 00:16:49.688 "message": "Input/output error" 00:16:49.688 } 00:16:49.688 22:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # es=1 00:16:49.688 22:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:16:49.688 22:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@664 -- # [[ -n '' ]] 
00:16:49.688 22:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:16:49.688 22:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:49.688 22:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:49.688 22:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.689 22:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:49.689 22:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:16:49.689 22:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:49.689 22:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.689 22:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:49.689 22:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.689 22:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@642 -- # local es=0 00:16:49.689 22:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@644 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.689 22:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@630 -- # local arg=hostrpc 00:16:49.689 22:40:33 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:16:49.689 22:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # type -t hostrpc 00:16:49.689 22:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:16:49.689 22:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.689 22:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.619 request: 00:16:50.619 { 00:16:50.619 "name": "nvme0", 00:16:50.619 "trtype": "tcp", 00:16:50.619 "traddr": "10.0.0.2", 00:16:50.619 "adrfam": "ipv4", 00:16:50.619 "trsvcid": "4420", 00:16:50.619 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:50.619 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:50.619 "prchk_reftag": false, 00:16:50.619 "prchk_guard": false, 00:16:50.619 "hdgst": false, 00:16:50.619 "ddgst": false, 00:16:50.619 "dhchap_key": "key1", 00:16:50.619 "dhchap_ctrlr_key": "ckey1", 00:16:50.619 "method": "bdev_nvme_attach_controller", 00:16:50.619 "req_id": 1 00:16:50.619 } 00:16:50.619 Got JSON-RPC error response 00:16:50.619 response: 00:16:50.619 { 00:16:50.619 "code": -5, 00:16:50.619 "message": "Input/output error" 00:16:50.619 } 00:16:50.619 22:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # es=1 00:16:50.619 22:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:16:50.619 22:40:33 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:16:50.619 22:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:16:50.619 22:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:50.619 22:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:50.619 22:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.619 22:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:50.619 22:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1248240 00:16:50.619 22:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@942 -- # '[' -z 1248240 ']' 00:16:50.619 22:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # kill -0 1248240 00:16:50.619 22:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # uname 00:16:50.619 22:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:16:50.619 22:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1248240 00:16:50.619 22:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:16:50.619 22:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:16:50.619 22:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1248240' 00:16:50.619 killing process with pid 1248240 00:16:50.619 22:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@961 -- # kill 1248240 00:16:50.619 22:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # wait 1248240 00:16:50.877 22:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L 
nvmf_auth 00:16:50.877 22:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:50.877 22:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:50.877 22:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.877 22:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1271040 00:16:50.877 22:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:50.877 22:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1271040 00:16:50.877 22:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@823 -- # '[' -z 1271040 ']' 00:16:50.877 22:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.877 22:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # local max_retries=100 00:16:50.877 22:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:50.877 22:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # xtrace_disable 00:16:50.877 22:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.250 22:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:16:52.250 22:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # return 0 00:16:52.250 22:40:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:52.250 22:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:52.250 22:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.250 22:40:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.250 22:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:52.250 22:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1271040 00:16:52.250 22:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@823 -- # '[' -z 1271040 ']' 00:16:52.250 22:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.250 22:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # local max_retries=100 00:16:52.250 22:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:52.250 22:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # xtrace_disable 00:16:52.250 22:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.250 22:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:16:52.250 22:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # return 0 00:16:52.250 22:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:16:52.250 22:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:52.250 22:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.510 22:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:52.510 22:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:16:52.510 22:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.510 22:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:52.510 22:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:52.510 22:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:52.510 22:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.510 22:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:52.510 22:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:52.510 22:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.510 22:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:52.510 22:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:52.510 22:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:53.444 00:16:53.444 22:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.444 22:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.444 22:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.703 22:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.703 22:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.703 22:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:53.703 22:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.703 22:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:53.703 22:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.703 { 00:16:53.703 "cntlid": 1, 00:16:53.703 "qid": 0, 00:16:53.703 "state": "enabled", 00:16:53.703 "thread": "nvmf_tgt_poll_group_000", 00:16:53.703 "listen_address": { 00:16:53.703 "trtype": "TCP", 00:16:53.703 "adrfam": "IPv4", 00:16:53.703 "traddr": "10.0.0.2", 00:16:53.703 "trsvcid": "4420" 00:16:53.703 }, 00:16:53.703 "peer_address": { 00:16:53.703 "trtype": "TCP", 00:16:53.703 "adrfam": "IPv4", 00:16:53.703 "traddr": "10.0.0.1", 00:16:53.703 "trsvcid": 
"48192" 00:16:53.703 }, 00:16:53.703 "auth": { 00:16:53.703 "state": "completed", 00:16:53.703 "digest": "sha512", 00:16:53.703 "dhgroup": "ffdhe8192" 00:16:53.703 } 00:16:53.703 } 00:16:53.703 ]' 00:16:53.703 22:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.703 22:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.703 22:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.703 22:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:53.703 22:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.703 22:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.703 22:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.703 22:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.961 22:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODBjNWYwOTA0NGQ2M2JlNzQ4OWYyNGE3NTdmMzFjNmJhNjNiZDhiM2VhYjA3ZjBjYzFmNzJhNWUyYWZmYzE2MfSBhX4=: 00:16:54.894 22:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.894 22:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:54.894 22:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # 
xtrace_disable 00:16:54.894 22:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.894 22:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:54.894 22:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:54.894 22:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:54.894 22:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.894 22:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:54.894 22:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:54.894 22:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:55.150 22:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.150 22:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@642 -- # local es=0 00:16:55.150 22:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@644 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.150 22:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@630 -- # local arg=hostrpc 00:16:55.150 22:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:16:55.150 22:40:38 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@634 -- # type -t hostrpc 00:16:55.150 22:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:16:55.150 22:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.150 22:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.407 request: 00:16:55.407 { 00:16:55.407 "name": "nvme0", 00:16:55.407 "trtype": "tcp", 00:16:55.407 "traddr": "10.0.0.2", 00:16:55.407 "adrfam": "ipv4", 00:16:55.407 "trsvcid": "4420", 00:16:55.407 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:55.407 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:55.407 "prchk_reftag": false, 00:16:55.407 "prchk_guard": false, 00:16:55.407 "hdgst": false, 00:16:55.407 "ddgst": false, 00:16:55.407 "dhchap_key": "key3", 00:16:55.407 "method": "bdev_nvme_attach_controller", 00:16:55.407 "req_id": 1 00:16:55.407 } 00:16:55.407 Got JSON-RPC error response 00:16:55.407 response: 00:16:55.407 { 00:16:55.407 "code": -5, 00:16:55.407 "message": "Input/output error" 00:16:55.407 } 00:16:55.407 22:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # es=1 00:16:55.407 22:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:16:55.407 22:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:16:55.407 22:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:16:55.407 22:40:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:16:55.407 22:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:16:55.407 22:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:55.407 22:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:55.665 22:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.665 22:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@642 -- # local es=0 00:16:55.665 22:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@644 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.665 22:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@630 -- # local arg=hostrpc 00:16:55.665 22:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:16:55.665 22:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # type -t hostrpc 00:16:55.665 22:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:16:55.665 22:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.665 
22:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.922 request: 00:16:55.922 { 00:16:55.922 "name": "nvme0", 00:16:55.922 "trtype": "tcp", 00:16:55.922 "traddr": "10.0.0.2", 00:16:55.922 "adrfam": "ipv4", 00:16:55.922 "trsvcid": "4420", 00:16:55.922 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:55.922 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:55.922 "prchk_reftag": false, 00:16:55.922 "prchk_guard": false, 00:16:55.922 "hdgst": false, 00:16:55.922 "ddgst": false, 00:16:55.922 "dhchap_key": "key3", 00:16:55.922 "method": "bdev_nvme_attach_controller", 00:16:55.922 "req_id": 1 00:16:55.922 } 00:16:55.922 Got JSON-RPC error response 00:16:55.922 response: 00:16:55.922 { 00:16:55.922 "code": -5, 00:16:55.922 "message": "Input/output error" 00:16:55.922 } 00:16:55.922 22:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # es=1 00:16:55.922 22:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:16:55.922 22:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:16:55.922 22:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:16:55.922 22:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:16:55.922 22:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:16:55.922 22:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:16:55.922 22:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:55.922 22:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:55.922 22:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:56.189 22:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:56.189 22:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:56.189 22:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.189 22:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:56.189 22:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:56.189 22:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:56.189 22:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.189 22:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:56.189 22:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:56.189 22:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@642 -- # local es=0 00:16:56.189 22:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@644 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:56.189 22:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@630 -- # local arg=hostrpc 00:16:56.189 22:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:16:56.189 22:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # type -t hostrpc 00:16:56.189 22:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:16:56.189 22:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:56.189 22:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:56.446 request: 00:16:56.446 { 00:16:56.446 "name": "nvme0", 00:16:56.446 "trtype": "tcp", 00:16:56.446 "traddr": "10.0.0.2", 00:16:56.446 "adrfam": "ipv4", 00:16:56.446 "trsvcid": "4420", 00:16:56.446 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:56.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:56.446 "prchk_reftag": false, 00:16:56.446 "prchk_guard": false, 00:16:56.446 "hdgst": false, 00:16:56.446 "ddgst": false, 00:16:56.446 "dhchap_key": "key0", 00:16:56.446 "dhchap_ctrlr_key": "key1", 00:16:56.446 "method": "bdev_nvme_attach_controller", 00:16:56.446 "req_id": 1 00:16:56.446 } 00:16:56.446 Got JSON-RPC error response 00:16:56.446 response: 00:16:56.446 { 
00:16:56.446 "code": -5, 00:16:56.446 "message": "Input/output error" 00:16:56.446 } 00:16:56.446 22:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # es=1 00:16:56.446 22:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:16:56.447 22:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:16:56.447 22:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:16:56.447 22:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:56.447 22:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:56.704 00:16:56.960 22:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:16:56.960 22:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.960 22:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:16:56.960 22:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.960 22:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.960 22:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.216 22:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - 
SIGINT SIGTERM EXIT 00:16:57.216 22:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:16:57.216 22:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1248332 00:16:57.216 22:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@942 -- # '[' -z 1248332 ']' 00:16:57.216 22:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # kill -0 1248332 00:16:57.216 22:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # uname 00:16:57.216 22:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:16:57.216 22:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1248332 00:16:57.472 22:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:16:57.472 22:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:16:57.472 22:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1248332' 00:16:57.472 killing process with pid 1248332 00:16:57.472 22:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@961 -- # kill 1248332 00:16:57.472 22:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # wait 1248332 00:16:57.729 22:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:16:57.729 22:40:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:57.729 22:40:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:16:57.729 22:40:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:57.729 22:40:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:16:57.729 22:40:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:57.729 22:40:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:57.729 rmmod nvme_tcp 00:16:57.986 rmmod nvme_fabrics 
00:16:57.986 rmmod nvme_keyring 00:16:57.986 22:40:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:57.986 22:40:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:16:57.986 22:40:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:16:57.986 22:40:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1271040 ']' 00:16:57.986 22:40:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1271040 00:16:57.986 22:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@942 -- # '[' -z 1271040 ']' 00:16:57.986 22:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # kill -0 1271040 00:16:57.986 22:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # uname 00:16:57.986 22:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:16:57.986 22:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1271040 00:16:57.986 22:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:16:57.986 22:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:16:57.986 22:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1271040' 00:16:57.986 killing process with pid 1271040 00:16:57.986 22:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@961 -- # kill 1271040 00:16:57.986 22:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # wait 1271040 00:16:58.244 22:40:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:58.244 22:40:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:58.244 22:40:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:58.244 22:40:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:16:58.244 22:40:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:58.244 22:40:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.244 22:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:58.244 22:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.149 22:40:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:00.149 22:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.QZe /tmp/spdk.key-sha256.MXe /tmp/spdk.key-sha384.vRW /tmp/spdk.key-sha512.P3d /tmp/spdk.key-sha512.f0C /tmp/spdk.key-sha384.GlI /tmp/spdk.key-sha256.WgZ '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:00.149 00:17:00.149 real 3m11.285s 00:17:00.149 user 7m25.575s 00:17:00.149 sys 0m25.024s 00:17:00.149 22:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1118 -- # xtrace_disable 00:17:00.149 22:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.149 ************************************ 00:17:00.149 END TEST nvmf_auth_target 00:17:00.149 ************************************ 00:17:00.149 22:40:43 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:17:00.149 22:40:43 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:17:00.149 22:40:43 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:00.149 22:40:43 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:17:00.149 22:40:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:17:00.149 22:40:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:00.407 
************************************ 00:17:00.407 START TEST nvmf_bdevio_no_huge 00:17:00.407 ************************************ 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:00.407 * Looking for test storage... 00:17:00.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 
00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:00.407 22:40:43 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:17:00.407 22:40:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:02.345 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:02.345 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:02.345 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:02.345 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:02.345 22:40:45 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:02.345 
22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:02.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:02.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:17:02.345 00:17:02.345 --- 10.0.0.2 ping statistics --- 00:17:02.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.345 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:02.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:02.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:17:02.345 00:17:02.345 --- 10.0.0.1 ping statistics --- 00:17:02.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.345 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:02.345 22:40:45 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1273815 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1273815 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@823 -- # '[' -z 1273815 ']' 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@828 -- # local max_retries=100 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # xtrace_disable 00:17:02.345 22:40:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:02.346 [2024-07-15 22:40:45.836070] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:17:02.346 [2024-07-15 22:40:45.836175] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:02.604 [2024-07-15 22:40:45.908541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:02.604 [2024-07-15 22:40:46.016301] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:02.604 [2024-07-15 22:40:46.016359] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:02.604 [2024-07-15 22:40:46.016388] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:02.604 [2024-07-15 22:40:46.016399] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:02.604 [2024-07-15 22:40:46.016409] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:02.604 [2024-07-15 22:40:46.016499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:02.604 [2024-07-15 22:40:46.016564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:02.604 [2024-07-15 22:40:46.016629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:02.604 [2024-07-15 22:40:46.016631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:02.860 22:40:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:17:02.860 22:40:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # return 0 00:17:02.860 22:40:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:02.860 22:40:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:02.860 22:40:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:02.860 22:40:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.860 22:40:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:02.860 22:40:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:02.860 22:40:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:02.860 [2024-07-15 22:40:46.145760] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:02.860 22:40:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:02.860 22:40:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:02.860 22:40:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:02.860 22:40:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:02.860 Malloc0 00:17:02.860 22:40:46 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:02.860 22:40:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:02.860 22:40:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:02.860 22:40:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:02.860 22:40:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:02.860 22:40:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:02.860 22:40:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:02.860 22:40:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:02.860 22:40:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:02.860 22:40:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:02.860 22:40:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:02.860 22:40:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:02.860 [2024-07-15 22:40:46.183779] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:02.860 22:40:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:02.860 22:40:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:02.860 22:40:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:02.860 22:40:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:17:02.860 22:40:46 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:17:02.860 22:40:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:02.860 22:40:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:02.860 { 00:17:02.860 "params": { 00:17:02.860 "name": "Nvme$subsystem", 00:17:02.860 "trtype": "$TEST_TRANSPORT", 00:17:02.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:02.860 "adrfam": "ipv4", 00:17:02.860 "trsvcid": "$NVMF_PORT", 00:17:02.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:02.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:02.860 "hdgst": ${hdgst:-false}, 00:17:02.860 "ddgst": ${ddgst:-false} 00:17:02.860 }, 00:17:02.860 "method": "bdev_nvme_attach_controller" 00:17:02.861 } 00:17:02.861 EOF 00:17:02.861 )") 00:17:02.861 22:40:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:17:02.861 22:40:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:17:02.861 22:40:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:17:02.861 22:40:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:02.861 "params": { 00:17:02.861 "name": "Nvme1", 00:17:02.861 "trtype": "tcp", 00:17:02.861 "traddr": "10.0.0.2", 00:17:02.861 "adrfam": "ipv4", 00:17:02.861 "trsvcid": "4420", 00:17:02.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:02.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:02.861 "hdgst": false, 00:17:02.861 "ddgst": false 00:17:02.861 }, 00:17:02.861 "method": "bdev_nvme_attach_controller" 00:17:02.861 }' 00:17:02.861 [2024-07-15 22:40:46.232038] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:17:02.861 [2024-07-15 22:40:46.232111] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1273845 ] 00:17:02.861 [2024-07-15 22:40:46.294573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:03.118 [2024-07-15 22:40:46.409622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:03.118 [2024-07-15 22:40:46.409684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:03.118 [2024-07-15 22:40:46.409687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.118 I/O targets: 00:17:03.118 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:03.118 00:17:03.118 00:17:03.118 CUnit - A unit testing framework for C - Version 2.1-3 00:17:03.118 http://cunit.sourceforge.net/ 00:17:03.118 00:17:03.118 00:17:03.118 Suite: bdevio tests on: Nvme1n1 00:17:03.118 Test: blockdev write read block ...passed 00:17:03.376 Test: blockdev write zeroes read block ...passed 00:17:03.376 Test: blockdev write zeroes read no split ...passed 00:17:03.376 Test: blockdev write zeroes read split ...passed 00:17:03.376 Test: blockdev write zeroes read split partial ...passed 00:17:03.376 Test: blockdev reset ...[2024-07-15 22:40:46.791381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:03.376 [2024-07-15 22:40:46.791501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f66fb0 (9): Bad file descriptor 00:17:03.376 [2024-07-15 22:40:46.848335] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:03.376 passed 00:17:03.376 Test: blockdev write read 8 blocks ...passed 00:17:03.376 Test: blockdev write read size > 128k ...passed 00:17:03.376 Test: blockdev write read invalid size ...passed 00:17:03.635 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:03.635 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:03.635 Test: blockdev write read max offset ...passed 00:17:03.635 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:03.635 Test: blockdev writev readv 8 blocks ...passed 00:17:03.635 Test: blockdev writev readv 30 x 1block ...passed 00:17:03.635 Test: blockdev writev readv block ...passed 00:17:03.635 Test: blockdev writev readv size > 128k ...passed 00:17:03.635 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:03.635 Test: blockdev comparev and writev ...[2024-07-15 22:40:47.026690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:03.635 [2024-07-15 22:40:47.026729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.635 [2024-07-15 22:40:47.026766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:03.635 [2024-07-15 22:40:47.026796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:03.635 [2024-07-15 22:40:47.027219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:03.635 [2024-07-15 22:40:47.027247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:03.635 [2024-07-15 22:40:47.027283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:03.635 [2024-07-15 22:40:47.027311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:03.635 [2024-07-15 22:40:47.027755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:03.635 [2024-07-15 22:40:47.027781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:03.635 [2024-07-15 22:40:47.027816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:03.635 [2024-07-15 22:40:47.027843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:03.635 [2024-07-15 22:40:47.028286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:03.635 [2024-07-15 22:40:47.028313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:03.635 [2024-07-15 22:40:47.028348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:03.635 [2024-07-15 22:40:47.028375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:03.635 passed 00:17:03.635 Test: blockdev nvme passthru rw ...passed 00:17:03.635 Test: blockdev nvme passthru vendor specific ...[2024-07-15 22:40:47.111274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:03.635 [2024-07-15 22:40:47.111304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:03.635 [2024-07-15 22:40:47.111532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:03.635 [2024-07-15 22:40:47.111565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:03.635 [2024-07-15 22:40:47.111822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:03.635 [2024-07-15 22:40:47.111848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:03.635 [2024-07-15 22:40:47.112093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:03.635 [2024-07-15 22:40:47.112119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:03.635 passed 00:17:03.635 Test: blockdev nvme admin passthru ...passed 00:17:03.893 Test: blockdev copy ...passed 00:17:03.894 00:17:03.894 Run Summary: Type Total Ran Passed Failed Inactive 00:17:03.894 suites 1 1 n/a 0 0 00:17:03.894 tests 23 23 23 0 0 00:17:03.894 asserts 152 152 152 0 n/a 00:17:03.894 00:17:03.894 Elapsed time = 1.191 seconds 00:17:04.152 22:40:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:04.152 22:40:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:04.152 22:40:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:04.152 22:40:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:04.152 22:40:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:04.152 22:40:47 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@30 -- # nvmftestfini 00:17:04.152 22:40:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:04.152 22:40:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:17:04.152 22:40:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:04.152 22:40:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:17:04.152 22:40:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:04.152 22:40:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:04.152 rmmod nvme_tcp 00:17:04.152 rmmod nvme_fabrics 00:17:04.152 rmmod nvme_keyring 00:17:04.152 22:40:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:04.152 22:40:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:17:04.152 22:40:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:17:04.152 22:40:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1273815 ']' 00:17:04.152 22:40:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1273815 00:17:04.152 22:40:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@942 -- # '[' -z 1273815 ']' 00:17:04.152 22:40:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # kill -0 1273815 00:17:04.152 22:40:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@947 -- # uname 00:17:04.152 22:40:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:17:04.152 22:40:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1273815 00:17:04.152 22:40:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # process_name=reactor_3 00:17:04.152 22:40:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' reactor_3 = sudo ']' 00:17:04.152 22:40:47 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@960 -- # echo 'killing process with pid 1273815' 00:17:04.152 killing process with pid 1273815 00:17:04.152 22:40:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@961 -- # kill 1273815 00:17:04.152 22:40:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # wait 1273815 00:17:04.718 22:40:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:04.718 22:40:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:04.718 22:40:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:04.718 22:40:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:04.718 22:40:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:04.718 22:40:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.718 22:40:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:04.718 22:40:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.620 22:40:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:06.620 00:17:06.620 real 0m6.403s 00:17:06.620 user 0m10.373s 00:17:06.620 sys 0m2.449s 00:17:06.620 22:40:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1118 -- # xtrace_disable 00:17:06.620 22:40:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:06.620 ************************************ 00:17:06.620 END TEST nvmf_bdevio_no_huge 00:17:06.620 ************************************ 00:17:06.620 22:40:50 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:17:06.620 22:40:50 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:06.620 22:40:50 nvmf_tcp -- 
common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:17:06.620 22:40:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:17:06.620 22:40:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:06.620 ************************************ 00:17:06.620 START TEST nvmf_tls 00:17:06.620 ************************************ 00:17:06.620 22:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:06.879 * Looking for test storage... 00:17:06.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:06.879 22:40:50 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:17:06.879 22:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:08.805 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:17:08.805 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:17:08.805 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:08.805 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:08.805 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:08.805 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:08.805 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:08.805 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:17:08.805 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:08.805 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:17:08.805 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:17:08.805 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:17:08.805 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:17:08.805 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:17:08.805 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:17:08.805 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:08.805 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:08.805 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:08.805 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:08.805 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:08.805 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:08.805 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:08.805 22:40:52 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:08.805 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:08.805 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:08.805 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:08.805 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:08.805 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:08.805 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:08.805 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:08.805 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:08.805 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:08.805 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:08.805 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:08.805 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:08.806 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:08.806 22:40:52 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:08.806 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:08.806 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:08.806 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:09.064 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:09.064 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:09.064 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:09.064 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:09.064 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:09.064 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:09.064 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:09.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:09.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:17:09.064 00:17:09.064 --- 10.0.0.2 ping statistics --- 00:17:09.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.064 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:17:09.064 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:09.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:09.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:17:09.064 00:17:09.064 --- 10.0.0.1 ping statistics --- 00:17:09.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.064 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:17:09.064 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:09.064 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:17:09.064 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:09.064 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:09.064 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:09.064 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:09.064 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:09.064 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:09.064 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:09.064 22:40:52 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:09.064 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:09.064 22:40:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:09.064 22:40:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:09.064 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1276033 00:17:09.064 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1276033 00:17:09.064 22:40:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:09.064 22:40:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1276033 ']' 00:17:09.064 22:40:52 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.065 22:40:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:17:09.065 22:40:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.065 22:40:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:17:09.065 22:40:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:09.065 [2024-07-15 22:40:52.460192] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:17:09.065 [2024-07-15 22:40:52.460282] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.065 [2024-07-15 22:40:52.530297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.323 [2024-07-15 22:40:52.645568] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.323 [2024-07-15 22:40:52.645625] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:09.323 [2024-07-15 22:40:52.645640] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:09.323 [2024-07-15 22:40:52.645654] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:09.323 [2024-07-15 22:40:52.645667] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:09.323 [2024-07-15 22:40:52.645695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.904 22:40:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:17:09.904 22:40:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:17:09.904 22:40:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:09.904 22:40:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:09.904 22:40:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:10.168 22:40:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.168 22:40:53 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:10.168 22:40:53 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:10.168 true 00:17:10.425 22:40:53 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:10.425 22:40:53 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:17:10.425 22:40:53 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:17:10.425 22:40:53 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:10.425 22:40:53 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:10.682 22:40:54 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:10.682 22:40:54 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:17:10.938 22:40:54 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:17:10.938 22:40:54 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:10.938 22:40:54 nvmf_tcp.nvmf_tls -- target/tls.sh@88 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:11.195 22:40:54 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:11.195 22:40:54 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:17:11.452 22:40:54 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:17:11.452 22:40:54 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:11.452 22:40:54 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:11.452 22:40:54 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:11.710 22:40:55 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:17:11.710 22:40:55 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:11.710 22:40:55 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:11.966 22:40:55 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:11.966 22:40:55 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:12.224 22:40:55 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:17:12.224 22:40:55 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:12.224 22:40:55 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:12.482 22:40:55 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:12.482 22:40:55 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:12.739 22:40:56 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # 
ktls=false 00:17:12.739 22:40:56 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:12.739 22:40:56 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:12.739 22:40:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:12.739 22:40:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:12.739 22:40:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:12.739 22:40:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:17:12.739 22:40:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:12.739 22:40:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:12.739 22:40:56 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:12.997 22:40:56 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:12.997 22:40:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:12.997 22:40:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:12.997 22:40:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:12.997 22:40:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:17:12.997 22:40:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:12.997 22:40:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:12.997 22:40:56 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:12.997 22:40:56 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:17:12.997 22:40:56 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.cbtL5fUbhG 00:17:12.997 22:40:56 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:12.997 
22:40:56 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.8TGxuJYtkm 00:17:12.997 22:40:56 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:12.997 22:40:56 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:12.997 22:40:56 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.cbtL5fUbhG 00:17:12.997 22:40:56 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.8TGxuJYtkm 00:17:12.997 22:40:56 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:13.254 22:40:56 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:13.512 22:40:56 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.cbtL5fUbhG 00:17:13.512 22:40:56 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.cbtL5fUbhG 00:17:13.512 22:40:56 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:13.769 [2024-07-15 22:40:57.222870] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:13.769 22:40:57 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:14.027 22:40:57 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:14.284 [2024-07-15 22:40:57.760386] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:14.284 [2024-07-15 22:40:57.760633] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:17:14.284 22:40:57 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:14.541 malloc0 00:17:14.541 22:40:58 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:14.798 22:40:58 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cbtL5fUbhG 00:17:15.055 [2024-07-15 22:40:58.489531] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:15.055 22:40:58 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.cbtL5fUbhG 00:17:27.283 Initializing NVMe Controllers 00:17:27.283 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:27.283 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:27.283 Initialization complete. Launching workers. 
00:17:27.283 ======================================================== 00:17:27.283 Latency(us) 00:17:27.283 Device Information : IOPS MiB/s Average min max 00:17:27.283 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7870.96 30.75 8133.70 1349.01 9711.69 00:17:27.283 ======================================================== 00:17:27.283 Total : 7870.96 30.75 8133.70 1349.01 9711.69 00:17:27.283 00:17:27.283 22:41:08 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cbtL5fUbhG 00:17:27.283 22:41:08 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:27.283 22:41:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:27.283 22:41:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:27.283 22:41:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.cbtL5fUbhG' 00:17:27.283 22:41:08 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:27.283 22:41:08 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1277938 00:17:27.283 22:41:08 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:27.283 22:41:08 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:27.283 22:41:08 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1277938 /var/tmp/bdevperf.sock 00:17:27.283 22:41:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1277938 ']' 00:17:27.283 22:41:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:27.283 22:41:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:17:27.283 22:41:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:27.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:27.283 22:41:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:17:27.283 22:41:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:27.283 [2024-07-15 22:41:08.664300] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:17:27.283 [2024-07-15 22:41:08.664377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1277938 ] 00:17:27.283 [2024-07-15 22:41:08.720239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.283 [2024-07-15 22:41:08.824434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:27.283 22:41:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:17:27.283 22:41:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:17:27.284 22:41:08 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cbtL5fUbhG 00:17:27.284 [2024-07-15 22:41:09.209846] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:27.284 [2024-07-15 22:41:09.209999] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:27.284 TLSTESTn1 00:17:27.284 22:41:09 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s 
/var/tmp/bdevperf.sock perform_tests 00:17:27.284 Running I/O for 10 seconds... 00:17:37.246 00:17:37.246 Latency(us) 00:17:37.246 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.246 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:37.246 Verification LBA range: start 0x0 length 0x2000 00:17:37.246 TLSTESTn1 : 10.06 1898.06 7.41 0.00 0.00 67231.07 6553.60 98643.82 00:17:37.246 =================================================================================================================== 00:17:37.246 Total : 1898.06 7.41 0.00 0.00 67231.07 6553.60 98643.82 00:17:37.246 0 00:17:37.246 22:41:19 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:37.246 22:41:19 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1277938 00:17:37.246 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1277938 ']' 00:17:37.246 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1277938 00:17:37.246 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:17:37.246 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:17:37.246 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1277938 00:17:37.246 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:17:37.246 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:17:37.246 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1277938' 00:17:37.246 killing process with pid 1277938 00:17:37.246 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1277938 00:17:37.246 Received shutdown signal, test time was about 10.000000 seconds 00:17:37.246 00:17:37.247 Latency(us) 00:17:37.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:37.247 =================================================================================================================== 00:17:37.247 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:37.247 [2024-07-15 22:41:19.524593] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:37.247 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1277938 00:17:37.247 22:41:19 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8TGxuJYtkm 00:17:37.247 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@642 -- # local es=0 00:17:37.247 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@644 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8TGxuJYtkm 00:17:37.247 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@630 -- # local arg=run_bdevperf 00:17:37.247 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:17:37.247 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # type -t run_bdevperf 00:17:37.247 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:17:37.247 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8TGxuJYtkm 00:17:37.247 22:41:19 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:37.247 22:41:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:37.247 22:41:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:37.247 22:41:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.8TGxuJYtkm' 00:17:37.247 22:41:19 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:37.247 22:41:19 nvmf_tcp.nvmf_tls -- 
target/tls.sh@28 -- # bdevperf_pid=1279249 00:17:37.247 22:41:19 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:37.247 22:41:19 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:37.247 22:41:19 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1279249 /var/tmp/bdevperf.sock 00:17:37.247 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1279249 ']' 00:17:37.247 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:37.247 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:17:37.247 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:37.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:37.247 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:17:37.247 22:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:37.247 [2024-07-15 22:41:19.840022] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:17:37.247 [2024-07-15 22:41:19.840110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1279249 ] 00:17:37.247 [2024-07-15 22:41:19.898564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.247 [2024-07-15 22:41:20.008265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8TGxuJYtkm 00:17:37.247 [2024-07-15 22:41:20.398827] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:37.247 [2024-07-15 22:41:20.398983] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:37.247 [2024-07-15 22:41:20.404397] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:37.247 [2024-07-15 22:41:20.404847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8bef90 (107): Transport endpoint is not connected 00:17:37.247 [2024-07-15 22:41:20.405828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8bef90 (9): Bad file descriptor 00:17:37.247 [2024-07-15 22:41:20.406829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:37.247 [2024-07-15 22:41:20.406852] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:37.247 [2024-07-15 22:41:20.406899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:37.247 request: 00:17:37.247 { 00:17:37.247 "name": "TLSTEST", 00:17:37.247 "trtype": "tcp", 00:17:37.247 "traddr": "10.0.0.2", 00:17:37.247 "adrfam": "ipv4", 00:17:37.247 "trsvcid": "4420", 00:17:37.247 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.247 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:37.247 "prchk_reftag": false, 00:17:37.247 "prchk_guard": false, 00:17:37.247 "hdgst": false, 00:17:37.247 "ddgst": false, 00:17:37.247 "psk": "/tmp/tmp.8TGxuJYtkm", 00:17:37.247 "method": "bdev_nvme_attach_controller", 00:17:37.247 "req_id": 1 00:17:37.247 } 00:17:37.247 Got JSON-RPC error response 00:17:37.247 response: 00:17:37.247 { 00:17:37.247 "code": -5, 00:17:37.247 "message": "Input/output error" 00:17:37.247 } 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1279249 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1279249 ']' 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1279249 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1279249 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1279249' 00:17:37.247 killing process with pid 1279249 00:17:37.247 
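Editor's note on the keys exercised in this run: the `format_interchange_psk` step earlier in the log turns a hex key such as `00112233445566778899aabbccddeeff` into an NVMe/TCP TLS interchange key of the form `NVMeTLSkey-1:01:<base64>:` via an inline `python -` snippet. A minimal standalone sketch of that formatting is below. It assumes the trailing four bytes are a standard zlib CRC-32 of the ASCII key bytes, appended little-endian before base64 encoding; that checksum detail is an assumption not confirmed by the log, so only the prefix and the key portion of the output are guaranteed to match the logged value.

```python
import base64
import zlib


def format_interchange_psk(key_hex: str, digest: int) -> str:
    """Sketch of the interchange-key formatting seen in this log.

    Takes the configured key as an ASCII hex string, appends a CRC-32
    (assumed: zlib variant, little-endian byte order), base64-encodes the
    result, and wraps it with the NVMeTLSkey-1 prefix and a two-digit
    hash identifier (01 in this run).
    """
    raw = key_hex.encode("ascii")
    crc = zlib.crc32(raw).to_bytes(4, "little")  # assumption: zlib CRC-32, LE
    return "NVMeTLSkey-1:{:02d}:{}:".format(
        digest, base64.b64encode(raw + crc).decode("ascii"))


# The log's first key, NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJi...,
# base64-decodes to the 32 ASCII key characters followed by 4 checksum bytes.
key = format_interchange_psk("00112233445566778899aabbccddeeff", 1)
```

The key portion of the base64 payload (the first 40 characters, covering bytes 0 through 29 of the ASCII key) is independent of the checksum variant and matches the logged value; only the final characters depend on the assumed CRC.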
22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1279249 00:17:37.247 Received shutdown signal, test time was about 10.000000 seconds 00:17:37.247 00:17:37.247 Latency(us) 00:17:37.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.247 =================================================================================================================== 00:17:37.247 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:37.247 [2024-07-15 22:41:20.459291] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1279249 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # es=1 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.cbtL5fUbhG 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@642 -- # local es=0 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@644 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.cbtL5fUbhG 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@630 -- # local arg=run_bdevperf 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # type -t run_bdevperf 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # 
case "$(type -t "$arg")" in 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.cbtL5fUbhG 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.cbtL5fUbhG' 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1279364 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1279364 /var/tmp/bdevperf.sock 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1279364 ']' 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:37.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:17:37.247 22:41:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:37.504 [2024-07-15 22:41:20.758643] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:17:37.504 [2024-07-15 22:41:20.758737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1279364 ] 00:17:37.504 [2024-07-15 22:41:20.820219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.505 [2024-07-15 22:41:20.932463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.761 22:41:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:17:37.761 22:41:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:17:37.761 22:41:21 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.cbtL5fUbhG 00:17:38.076 [2024-07-15 22:41:21.262665] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:38.076 [2024-07-15 22:41:21.262794] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:38.076 [2024-07-15 22:41:21.273698] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:38.076 [2024-07-15 22:41:21.273729] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 
nqn.2016-06.io.spdk:cnode1 00:17:38.076 [2024-07-15 22:41:21.273784] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:38.076 [2024-07-15 22:41:21.274751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4bf90 (107): Transport endpoint is not connected 00:17:38.076 [2024-07-15 22:41:21.275740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4bf90 (9): Bad file descriptor 00:17:38.076 [2024-07-15 22:41:21.276739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:38.076 [2024-07-15 22:41:21.276763] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:38.076 [2024-07-15 22:41:21.276790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:38.076 request: 00:17:38.076 { 00:17:38.076 "name": "TLSTEST", 00:17:38.076 "trtype": "tcp", 00:17:38.076 "traddr": "10.0.0.2", 00:17:38.076 "adrfam": "ipv4", 00:17:38.076 "trsvcid": "4420", 00:17:38.076 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.076 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:38.076 "prchk_reftag": false, 00:17:38.076 "prchk_guard": false, 00:17:38.076 "hdgst": false, 00:17:38.076 "ddgst": false, 00:17:38.076 "psk": "/tmp/tmp.cbtL5fUbhG", 00:17:38.076 "method": "bdev_nvme_attach_controller", 00:17:38.076 "req_id": 1 00:17:38.076 } 00:17:38.076 Got JSON-RPC error response 00:17:38.076 response: 00:17:38.076 { 00:17:38.076 "code": -5, 00:17:38.076 "message": "Input/output error" 00:17:38.076 } 00:17:38.076 22:41:21 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1279364 00:17:38.076 22:41:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1279364 ']' 00:17:38.077 22:41:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 
1279364 00:17:38.077 22:41:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:17:38.077 22:41:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:17:38.077 22:41:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1279364 00:17:38.077 22:41:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:17:38.077 22:41:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:17:38.077 22:41:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1279364' 00:17:38.077 killing process with pid 1279364 00:17:38.077 22:41:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1279364 00:17:38.077 Received shutdown signal, test time was about 10.000000 seconds 00:17:38.077 00:17:38.077 Latency(us) 00:17:38.077 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.077 =================================================================================================================== 00:17:38.077 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:38.077 [2024-07-15 22:41:21.329215] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:38.077 22:41:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1279364 00:17:38.333 22:41:21 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:38.333 22:41:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # es=1 00:17:38.333 22:41:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:17:38.333 22:41:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:17:38.333 22:41:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:17:38.333 22:41:21 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 
nqn.2016-06.io.spdk:host1 /tmp/tmp.cbtL5fUbhG 00:17:38.333 22:41:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@642 -- # local es=0 00:17:38.333 22:41:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@644 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.cbtL5fUbhG 00:17:38.334 22:41:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@630 -- # local arg=run_bdevperf 00:17:38.334 22:41:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:17:38.334 22:41:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # type -t run_bdevperf 00:17:38.334 22:41:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:17:38.334 22:41:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.cbtL5fUbhG 00:17:38.334 22:41:21 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:38.334 22:41:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:38.334 22:41:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:38.334 22:41:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.cbtL5fUbhG' 00:17:38.334 22:41:21 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:38.334 22:41:21 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1279412 00:17:38.334 22:41:21 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:38.334 22:41:21 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:38.334 22:41:21 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1279412 /var/tmp/bdevperf.sock 00:17:38.334 22:41:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1279412 ']' 
00:17:38.334 22:41:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:38.334 22:41:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:17:38.334 22:41:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:38.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:38.334 22:41:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:17:38.334 22:41:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:38.334 [2024-07-15 22:41:21.643943] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:17:38.334 [2024-07-15 22:41:21.644029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1279412 ] 00:17:38.334 [2024-07-15 22:41:21.727289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.591 [2024-07-15 22:41:21.880660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:38.591 22:41:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:17:38.591 22:41:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:17:38.591 22:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cbtL5fUbhG 00:17:38.848 [2024-07-15 22:41:22.253928] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:38.848 [2024-07-15 22:41:22.254055] 
nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:38.848 [2024-07-15 22:41:22.259269] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:38.848 [2024-07-15 22:41:22.259304] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:38.848 [2024-07-15 22:41:22.259356] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:38.848 [2024-07-15 22:41:22.259867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2233f90 (107): Transport endpoint is not connected 00:17:38.848 [2024-07-15 22:41:22.260855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2233f90 (9): Bad file descriptor 00:17:38.848 [2024-07-15 22:41:22.261854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:38.848 [2024-07-15 22:41:22.261896] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:38.848 [2024-07-15 22:41:22.261915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:17:38.848 request: 00:17:38.848 { 00:17:38.848 "name": "TLSTEST", 00:17:38.848 "trtype": "tcp", 00:17:38.848 "traddr": "10.0.0.2", 00:17:38.848 "adrfam": "ipv4", 00:17:38.848 "trsvcid": "4420", 00:17:38.848 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:38.848 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:38.848 "prchk_reftag": false, 00:17:38.848 "prchk_guard": false, 00:17:38.848 "hdgst": false, 00:17:38.848 "ddgst": false, 00:17:38.848 "psk": "/tmp/tmp.cbtL5fUbhG", 00:17:38.848 "method": "bdev_nvme_attach_controller", 00:17:38.848 "req_id": 1 00:17:38.848 } 00:17:38.848 Got JSON-RPC error response 00:17:38.848 response: 00:17:38.848 { 00:17:38.848 "code": -5, 00:17:38.848 "message": "Input/output error" 00:17:38.848 } 00:17:38.848 22:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1279412 00:17:38.848 22:41:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1279412 ']' 00:17:38.848 22:41:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1279412 00:17:38.848 22:41:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:17:38.848 22:41:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:17:38.848 22:41:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1279412 00:17:38.848 22:41:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:17:38.848 22:41:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:17:38.848 22:41:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1279412' 00:17:38.848 killing process with pid 1279412 00:17:38.848 22:41:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1279412 00:17:38.848 Received shutdown signal, test time was about 10.000000 seconds 00:17:38.848 00:17:38.848 Latency(us) 00:17:38.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.848 
=================================================================================================================== 00:17:38.848 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:38.848 [2024-07-15 22:41:22.315229] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:38.848 22:41:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1279412 00:17:39.104 22:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:39.104 22:41:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # es=1 00:17:39.104 22:41:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:17:39.104 22:41:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:17:39.104 22:41:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:17:39.104 22:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:39.104 22:41:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@642 -- # local es=0 00:17:39.104 22:41:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@644 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:39.104 22:41:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@630 -- # local arg=run_bdevperf 00:17:39.104 22:41:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:17:39.104 22:41:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # type -t run_bdevperf 00:17:39.104 22:41:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:17:39.104 22:41:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:39.104 22:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:39.104 22:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 
-- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:39.104 22:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:39.104 22:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:39.104 22:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:39.104 22:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1279548 00:17:39.104 22:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:39.104 22:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:39.104 22:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1279548 /var/tmp/bdevperf.sock 00:17:39.104 22:41:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1279548 ']' 00:17:39.104 22:41:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:39.105 22:41:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:17:39.105 22:41:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:39.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:39.105 22:41:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:17:39.105 22:41:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:39.361 [2024-07-15 22:41:22.622005] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:17:39.361 [2024-07-15 22:41:22.622082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1279548 ] 00:17:39.361 [2024-07-15 22:41:22.679786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.361 [2024-07-15 22:41:22.781967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:39.619 22:41:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:17:39.619 22:41:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:17:39.619 22:41:22 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:39.878 [2024-07-15 22:41:23.138600] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:39.878 [2024-07-15 22:41:23.140508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d88770 (9): Bad file descriptor 00:17:39.878 [2024-07-15 22:41:23.141503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:39.878 [2024-07-15 22:41:23.141523] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:39.878 [2024-07-15 22:41:23.141554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:39.878 request: 00:17:39.878 { 00:17:39.878 "name": "TLSTEST", 00:17:39.878 "trtype": "tcp", 00:17:39.878 "traddr": "10.0.0.2", 00:17:39.878 "adrfam": "ipv4", 00:17:39.878 "trsvcid": "4420", 00:17:39.878 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.878 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:39.878 "prchk_reftag": false, 00:17:39.878 "prchk_guard": false, 00:17:39.878 "hdgst": false, 00:17:39.878 "ddgst": false, 00:17:39.878 "method": "bdev_nvme_attach_controller", 00:17:39.878 "req_id": 1 00:17:39.878 } 00:17:39.878 Got JSON-RPC error response 00:17:39.878 response: 00:17:39.878 { 00:17:39.878 "code": -5, 00:17:39.878 "message": "Input/output error" 00:17:39.878 } 00:17:39.878 22:41:23 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1279548 00:17:39.878 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1279548 ']' 00:17:39.878 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1279548 00:17:39.878 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:17:39.878 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:17:39.878 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1279548 00:17:39.878 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:17:39.878 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:17:39.878 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1279548' 00:17:39.878 killing process with pid 1279548 00:17:39.878 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1279548 00:17:39.878 Received shutdown signal, test time was about 10.000000 seconds 00:17:39.878 00:17:39.878 Latency(us) 00:17:39.878 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.878 
=================================================================================================================== 00:17:39.878 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:39.878 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1279548 00:17:40.136 22:41:23 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:40.136 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # es=1 00:17:40.136 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:17:40.136 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:17:40.136 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:17:40.136 22:41:23 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1276033 00:17:40.136 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1276033 ']' 00:17:40.136 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1276033 00:17:40.136 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:17:40.136 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:17:40.136 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1276033 00:17:40.136 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:17:40.136 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:17:40.136 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1276033' 00:17:40.136 killing process with pid 1276033 00:17:40.136 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1276033 00:17:40.136 [2024-07-15 22:41:23.442374] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:40.136 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # 
wait 1276033 00:17:40.394 22:41:23 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:40.394 22:41:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:40.394 22:41:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:40.394 22:41:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:40.394 22:41:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:40.394 22:41:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:17:40.394 22:41:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:40.394 22:41:23 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:40.394 22:41:23 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:17:40.395 22:41:23 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.myCErdnqKM 00:17:40.395 22:41:23 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:40.395 22:41:23 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.myCErdnqKM 00:17:40.395 22:41:23 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:17:40.395 22:41:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:40.395 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:40.395 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:40.395 22:41:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1279701 00:17:40.395 22:41:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:40.395 22:41:23 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1279701 00:17:40.395 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1279701 ']' 00:17:40.395 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.395 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:17:40.395 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.395 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:17:40.395 22:41:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:40.395 [2024-07-15 22:41:23.846307] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:17:40.395 [2024-07-15 22:41:23.846378] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.653 [2024-07-15 22:41:23.912900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.653 [2024-07-15 22:41:24.032760] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.653 [2024-07-15 22:41:24.032826] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:40.653 [2024-07-15 22:41:24.032843] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:40.653 [2024-07-15 22:41:24.032857] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:40.653 [2024-07-15 22:41:24.032868] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:40.653 [2024-07-15 22:41:24.032910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.653 22:41:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:17:40.653 22:41:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:17:40.653 22:41:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:40.653 22:41:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:40.653 22:41:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:40.910 22:41:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.910 22:41:24 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.myCErdnqKM 00:17:40.910 22:41:24 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.myCErdnqKM 00:17:40.910 22:41:24 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:41.168 [2024-07-15 22:41:24.434002] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:41.168 22:41:24 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:41.426 22:41:24 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:41.426 [2024-07-15 22:41:24.919294] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:41.426 [2024-07-15 22:41:24.919531] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:41.685 22:41:24 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 
4096 -b malloc0 00:17:41.943 malloc0 00:17:41.943 22:41:25 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:42.201 22:41:25 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.myCErdnqKM 00:17:42.460 [2024-07-15 22:41:25.797405] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:42.460 22:41:25 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.myCErdnqKM 00:17:42.460 22:41:25 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:42.460 22:41:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:42.460 22:41:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:42.460 22:41:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.myCErdnqKM' 00:17:42.460 22:41:25 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:42.460 22:41:25 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1279983 00:17:42.460 22:41:25 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:42.460 22:41:25 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:42.460 22:41:25 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1279983 /var/tmp/bdevperf.sock 00:17:42.460 22:41:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1279983 ']' 00:17:42.460 22:41:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:17:42.460 22:41:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:17:42.460 22:41:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:42.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:42.460 22:41:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:17:42.460 22:41:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:42.460 [2024-07-15 22:41:25.862889] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:17:42.460 [2024-07-15 22:41:25.862962] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1279983 ] 00:17:42.460 [2024-07-15 22:41:25.919038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.719 [2024-07-15 22:41:26.024443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:42.719 22:41:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:17:42.719 22:41:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:17:42.719 22:41:26 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.myCErdnqKM 00:17:42.978 [2024-07-15 22:41:26.366377] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:42.978 [2024-07-15 22:41:26.366494] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be 
removed in v24.09 00:17:42.978 TLSTESTn1 00:17:42.978 22:41:26 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:43.235 Running I/O for 10 seconds... 00:17:53.209 00:17:53.209 Latency(us) 00:17:53.209 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.209 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:53.209 Verification LBA range: start 0x0 length 0x2000 00:17:53.209 TLSTESTn1 : 10.06 1849.35 7.22 0.00 0.00 69012.54 5873.97 100973.99 00:17:53.209 =================================================================================================================== 00:17:53.209 Total : 1849.35 7.22 0.00 0.00 69012.54 5873.97 100973.99 00:17:53.209 0 00:17:53.209 22:41:36 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:53.209 22:41:36 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1279983 00:17:53.209 22:41:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1279983 ']' 00:17:53.209 22:41:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1279983 00:17:53.209 22:41:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:17:53.209 22:41:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:17:53.209 22:41:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1279983 00:17:53.209 22:41:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:17:53.209 22:41:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:17:53.209 22:41:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1279983' 00:17:53.209 killing process with pid 1279983 00:17:53.209 22:41:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1279983 
00:17:53.209 Received shutdown signal, test time was about 10.000000 seconds 00:17:53.209 00:17:53.209 Latency(us) 00:17:53.209 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.209 =================================================================================================================== 00:17:53.209 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:53.209 [2024-07-15 22:41:36.689012] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:53.209 22:41:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1279983 00:17:53.478 22:41:36 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.myCErdnqKM 00:17:53.478 22:41:36 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.myCErdnqKM 00:17:53.478 22:41:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@642 -- # local es=0 00:17:53.478 22:41:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@644 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.myCErdnqKM 00:17:53.478 22:41:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@630 -- # local arg=run_bdevperf 00:17:53.478 22:41:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:17:53.478 22:41:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # type -t run_bdevperf 00:17:53.478 22:41:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:17:53.478 22:41:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.myCErdnqKM 00:17:53.478 22:41:36 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:53.478 22:41:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:53.478 22:41:36 nvmf_tcp.nvmf_tls -- 
target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:53.478 22:41:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.myCErdnqKM' 00:17:53.478 22:41:36 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:53.478 22:41:36 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1281300 00:17:53.478 22:41:36 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:53.478 22:41:36 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:53.478 22:41:36 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1281300 /var/tmp/bdevperf.sock 00:17:53.478 22:41:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1281300 ']' 00:17:53.478 22:41:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:53.478 22:41:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:17:53.478 22:41:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:53.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:53.478 22:41:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:17:53.478 22:41:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:53.737 [2024-07-15 22:41:37.009997] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:17:53.737 [2024-07-15 22:41:37.010073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1281300 ] 00:17:53.737 [2024-07-15 22:41:37.068200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.737 [2024-07-15 22:41:37.177371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.995 22:41:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:17:53.995 22:41:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:17:53.995 22:41:37 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.myCErdnqKM 00:17:54.254 [2024-07-15 22:41:37.510715] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:54.254 [2024-07-15 22:41:37.510788] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:54.254 [2024-07-15 22:41:37.510807] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.myCErdnqKM 00:17:54.254 request: 00:17:54.254 { 00:17:54.254 "name": "TLSTEST", 00:17:54.254 "trtype": "tcp", 00:17:54.254 "traddr": "10.0.0.2", 00:17:54.254 "adrfam": "ipv4", 00:17:54.254 "trsvcid": "4420", 00:17:54.254 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:54.254 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:54.254 "prchk_reftag": false, 00:17:54.254 "prchk_guard": false, 00:17:54.254 "hdgst": false, 00:17:54.254 "ddgst": false, 00:17:54.254 "psk": "/tmp/tmp.myCErdnqKM", 00:17:54.254 "method": "bdev_nvme_attach_controller", 00:17:54.254 "req_id": 1 00:17:54.254 } 00:17:54.254 Got JSON-RPC 
error response 00:17:54.254 response: 00:17:54.254 { 00:17:54.254 "code": -1, 00:17:54.254 "message": "Operation not permitted" 00:17:54.254 } 00:17:54.254 22:41:37 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1281300 00:17:54.254 22:41:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1281300 ']' 00:17:54.254 22:41:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1281300 00:17:54.254 22:41:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:17:54.254 22:41:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:17:54.254 22:41:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1281300 00:17:54.254 22:41:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:17:54.254 22:41:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:17:54.254 22:41:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1281300' 00:17:54.254 killing process with pid 1281300 00:17:54.254 22:41:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1281300 00:17:54.254 Received shutdown signal, test time was about 10.000000 seconds 00:17:54.254 00:17:54.254 Latency(us) 00:17:54.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.254 =================================================================================================================== 00:17:54.254 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:54.254 22:41:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1281300 00:17:54.512 22:41:37 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:54.512 22:41:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # es=1 00:17:54.512 22:41:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:17:54.512 22:41:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@664 -- # [[ 
-n '' ]] 00:17:54.512 22:41:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:17:54.512 22:41:37 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1279701 00:17:54.512 22:41:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1279701 ']' 00:17:54.512 22:41:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1279701 00:17:54.512 22:41:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:17:54.512 22:41:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:17:54.512 22:41:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1279701 00:17:54.512 22:41:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:17:54.512 22:41:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:17:54.512 22:41:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1279701' 00:17:54.512 killing process with pid 1279701 00:17:54.512 22:41:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1279701 00:17:54.512 [2024-07-15 22:41:37.847423] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:54.512 22:41:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1279701 00:17:54.770 22:41:38 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:17:54.770 22:41:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:54.770 22:41:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:54.770 22:41:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:54.770 22:41:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1281446 00:17:54.770 22:41:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:54.770 22:41:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1281446 00:17:54.770 22:41:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1281446 ']' 00:17:54.770 22:41:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.770 22:41:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:17:54.770 22:41:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.770 22:41:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:17:54.770 22:41:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:54.770 [2024-07-15 22:41:38.196312] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:17:54.770 [2024-07-15 22:41:38.196389] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.770 [2024-07-15 22:41:38.260789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.027 [2024-07-15 22:41:38.366285] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:55.027 [2024-07-15 22:41:38.366343] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:55.027 [2024-07-15 22:41:38.366356] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:55.027 [2024-07-15 22:41:38.366366] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:17:55.027 [2024-07-15 22:41:38.366375] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:55.027 [2024-07-15 22:41:38.366401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:55.027 22:41:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:17:55.027 22:41:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:17:55.027 22:41:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:55.027 22:41:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:55.027 22:41:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.027 22:41:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:55.027 22:41:38 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.myCErdnqKM 00:17:55.027 22:41:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@642 -- # local es=0 00:17:55.027 22:41:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@644 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.myCErdnqKM 00:17:55.027 22:41:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@630 -- # local arg=setup_nvmf_tgt 00:17:55.027 22:41:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:17:55.027 22:41:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # type -t setup_nvmf_tgt 00:17:55.027 22:41:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:17:55.027 22:41:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # setup_nvmf_tgt /tmp/tmp.myCErdnqKM 00:17:55.027 22:41:38 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.myCErdnqKM 00:17:55.028 22:41:38 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:55.284 [2024-07-15 22:41:38.730599] tcp.c: 
672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:55.284 22:41:38 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:55.541 22:41:38 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:55.798 [2024-07-15 22:41:39.260061] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:55.799 [2024-07-15 22:41:39.260340] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:55.799 22:41:39 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:56.055 malloc0 00:17:56.056 22:41:39 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:56.313 22:41:39 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.myCErdnqKM 00:17:56.569 [2024-07-15 22:41:39.990231] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:56.569 [2024-07-15 22:41:39.990268] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:56.569 [2024-07-15 22:41:39.990313] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:17:56.569 request: 00:17:56.569 { 00:17:56.569 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.569 "host": "nqn.2016-06.io.spdk:host1", 00:17:56.569 "psk": "/tmp/tmp.myCErdnqKM", 00:17:56.569 "method": "nvmf_subsystem_add_host", 00:17:56.569 "req_id": 1 00:17:56.569 } 
00:17:56.569 Got JSON-RPC error response 00:17:56.569 response: 00:17:56.569 { 00:17:56.569 "code": -32603, 00:17:56.569 "message": "Internal error" 00:17:56.569 } 00:17:56.569 22:41:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # es=1 00:17:56.569 22:41:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:17:56.569 22:41:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:17:56.569 22:41:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:17:56.569 22:41:40 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1281446 00:17:56.569 22:41:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1281446 ']' 00:17:56.569 22:41:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1281446 00:17:56.570 22:41:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:17:56.570 22:41:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:17:56.570 22:41:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1281446 00:17:56.570 22:41:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:17:56.570 22:41:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:17:56.570 22:41:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1281446' 00:17:56.570 killing process with pid 1281446 00:17:56.570 22:41:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1281446 00:17:56.570 22:41:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1281446 00:17:56.826 22:41:40 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.myCErdnqKM 00:17:56.827 22:41:40 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:56.827 22:41:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:56.827 22:41:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@716 -- 
# xtrace_disable 00:17:56.827 22:41:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.827 22:41:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1281738 00:17:56.827 22:41:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:56.827 22:41:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1281738 00:17:56.827 22:41:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1281738 ']' 00:17:56.827 22:41:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.827 22:41:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:17:56.827 22:41:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.827 22:41:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:17:56.827 22:41:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.084 [2024-07-15 22:41:40.361935] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:17:57.084 [2024-07-15 22:41:40.362010] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.084 [2024-07-15 22:41:40.427833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.084 [2024-07-15 22:41:40.545995] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:57.084 [2024-07-15 22:41:40.546062] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:57.084 [2024-07-15 22:41:40.546077] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:57.084 [2024-07-15 22:41:40.546088] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:57.084 [2024-07-15 22:41:40.546097] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:57.084 [2024-07-15 22:41:40.546126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.341 22:41:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:17:57.341 22:41:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:17:57.341 22:41:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:57.341 22:41:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:57.341 22:41:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.341 22:41:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.341 22:41:40 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.myCErdnqKM 00:17:57.341 22:41:40 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.myCErdnqKM 00:17:57.341 22:41:40 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:57.598 [2024-07-15 22:41:40.913243] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:57.598 22:41:40 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:57.855 22:41:41 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 
00:17:58.114 [2024-07-15 22:41:41.490805] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:58.114 [2024-07-15 22:41:41.491066] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.114 22:41:41 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:58.372 malloc0 00:17:58.372 22:41:41 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:58.630 22:41:42 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.myCErdnqKM 00:17:58.888 [2024-07-15 22:41:42.304265] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:58.888 22:41:42 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1281933 00:17:58.888 22:41:42 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:58.888 22:41:42 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:58.888 22:41:42 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1281933 /var/tmp/bdevperf.sock 00:17:58.888 22:41:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1281933 ']' 00:17:58.888 22:41:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:58.888 22:41:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:17:58.888 22:41:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:17:58.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:58.888 22:41:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:17:58.889 22:41:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:58.889 [2024-07-15 22:41:42.370596] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:17:58.889 [2024-07-15 22:41:42.370682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1281933 ] 00:17:59.147 [2024-07-15 22:41:42.430914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.147 [2024-07-15 22:41:42.534995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:59.147 22:41:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:17:59.147 22:41:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:17:59.147 22:41:42 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.myCErdnqKM 00:17:59.405 [2024-07-15 22:41:42.891428] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:59.405 [2024-07-15 22:41:42.891551] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:59.662 TLSTESTn1 00:17:59.662 22:41:42 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:17:59.921 22:41:43 nvmf_tcp.nvmf_tls -- 
target/tls.sh@196 -- # tgtconf='{ 00:17:59.921 "subsystems": [ 00:17:59.921 { 00:17:59.921 "subsystem": "keyring", 00:17:59.921 "config": [] 00:17:59.921 }, 00:17:59.921 { 00:17:59.921 "subsystem": "iobuf", 00:17:59.921 "config": [ 00:17:59.921 { 00:17:59.921 "method": "iobuf_set_options", 00:17:59.921 "params": { 00:17:59.921 "small_pool_count": 8192, 00:17:59.921 "large_pool_count": 1024, 00:17:59.921 "small_bufsize": 8192, 00:17:59.921 "large_bufsize": 135168 00:17:59.921 } 00:17:59.921 } 00:17:59.921 ] 00:17:59.921 }, 00:17:59.921 { 00:17:59.921 "subsystem": "sock", 00:17:59.921 "config": [ 00:17:59.921 { 00:17:59.921 "method": "sock_set_default_impl", 00:17:59.921 "params": { 00:17:59.921 "impl_name": "posix" 00:17:59.921 } 00:17:59.922 }, 00:17:59.922 { 00:17:59.922 "method": "sock_impl_set_options", 00:17:59.922 "params": { 00:17:59.922 "impl_name": "ssl", 00:17:59.922 "recv_buf_size": 4096, 00:17:59.922 "send_buf_size": 4096, 00:17:59.922 "enable_recv_pipe": true, 00:17:59.922 "enable_quickack": false, 00:17:59.922 "enable_placement_id": 0, 00:17:59.922 "enable_zerocopy_send_server": true, 00:17:59.922 "enable_zerocopy_send_client": false, 00:17:59.922 "zerocopy_threshold": 0, 00:17:59.922 "tls_version": 0, 00:17:59.922 "enable_ktls": false 00:17:59.922 } 00:17:59.922 }, 00:17:59.922 { 00:17:59.922 "method": "sock_impl_set_options", 00:17:59.922 "params": { 00:17:59.922 "impl_name": "posix", 00:17:59.922 "recv_buf_size": 2097152, 00:17:59.922 "send_buf_size": 2097152, 00:17:59.922 "enable_recv_pipe": true, 00:17:59.922 "enable_quickack": false, 00:17:59.922 "enable_placement_id": 0, 00:17:59.922 "enable_zerocopy_send_server": true, 00:17:59.922 "enable_zerocopy_send_client": false, 00:17:59.922 "zerocopy_threshold": 0, 00:17:59.922 "tls_version": 0, 00:17:59.922 "enable_ktls": false 00:17:59.922 } 00:17:59.922 } 00:17:59.922 ] 00:17:59.922 }, 00:17:59.922 { 00:17:59.922 "subsystem": "vmd", 00:17:59.922 "config": [] 00:17:59.922 }, 00:17:59.922 { 
00:17:59.922 "subsystem": "accel", 00:17:59.922 "config": [ 00:17:59.922 { 00:17:59.922 "method": "accel_set_options", 00:17:59.922 "params": { 00:17:59.922 "small_cache_size": 128, 00:17:59.922 "large_cache_size": 16, 00:17:59.922 "task_count": 2048, 00:17:59.922 "sequence_count": 2048, 00:17:59.922 "buf_count": 2048 00:17:59.922 } 00:17:59.922 } 00:17:59.922 ] 00:17:59.922 }, 00:17:59.922 { 00:17:59.922 "subsystem": "bdev", 00:17:59.922 "config": [ 00:17:59.922 { 00:17:59.922 "method": "bdev_set_options", 00:17:59.922 "params": { 00:17:59.922 "bdev_io_pool_size": 65535, 00:17:59.922 "bdev_io_cache_size": 256, 00:17:59.922 "bdev_auto_examine": true, 00:17:59.922 "iobuf_small_cache_size": 128, 00:17:59.922 "iobuf_large_cache_size": 16 00:17:59.922 } 00:17:59.922 }, 00:17:59.922 { 00:17:59.922 "method": "bdev_raid_set_options", 00:17:59.922 "params": { 00:17:59.922 "process_window_size_kb": 1024 00:17:59.922 } 00:17:59.922 }, 00:17:59.922 { 00:17:59.922 "method": "bdev_iscsi_set_options", 00:17:59.922 "params": { 00:17:59.922 "timeout_sec": 30 00:17:59.922 } 00:17:59.922 }, 00:17:59.922 { 00:17:59.922 "method": "bdev_nvme_set_options", 00:17:59.922 "params": { 00:17:59.922 "action_on_timeout": "none", 00:17:59.922 "timeout_us": 0, 00:17:59.922 "timeout_admin_us": 0, 00:17:59.922 "keep_alive_timeout_ms": 10000, 00:17:59.922 "arbitration_burst": 0, 00:17:59.922 "low_priority_weight": 0, 00:17:59.922 "medium_priority_weight": 0, 00:17:59.922 "high_priority_weight": 0, 00:17:59.922 "nvme_adminq_poll_period_us": 10000, 00:17:59.922 "nvme_ioq_poll_period_us": 0, 00:17:59.922 "io_queue_requests": 0, 00:17:59.922 "delay_cmd_submit": true, 00:17:59.922 "transport_retry_count": 4, 00:17:59.922 "bdev_retry_count": 3, 00:17:59.922 "transport_ack_timeout": 0, 00:17:59.922 "ctrlr_loss_timeout_sec": 0, 00:17:59.922 "reconnect_delay_sec": 0, 00:17:59.922 "fast_io_fail_timeout_sec": 0, 00:17:59.922 "disable_auto_failback": false, 00:17:59.922 "generate_uuids": false, 00:17:59.922 
"transport_tos": 0, 00:17:59.922 "nvme_error_stat": false, 00:17:59.922 "rdma_srq_size": 0, 00:17:59.922 "io_path_stat": false, 00:17:59.922 "allow_accel_sequence": false, 00:17:59.922 "rdma_max_cq_size": 0, 00:17:59.922 "rdma_cm_event_timeout_ms": 0, 00:17:59.922 "dhchap_digests": [ 00:17:59.922 "sha256", 00:17:59.922 "sha384", 00:17:59.922 "sha512" 00:17:59.922 ], 00:17:59.922 "dhchap_dhgroups": [ 00:17:59.922 "null", 00:17:59.922 "ffdhe2048", 00:17:59.922 "ffdhe3072", 00:17:59.922 "ffdhe4096", 00:17:59.922 "ffdhe6144", 00:17:59.922 "ffdhe8192" 00:17:59.922 ] 00:17:59.922 } 00:17:59.922 }, 00:17:59.922 { 00:17:59.922 "method": "bdev_nvme_set_hotplug", 00:17:59.922 "params": { 00:17:59.922 "period_us": 100000, 00:17:59.922 "enable": false 00:17:59.922 } 00:17:59.922 }, 00:17:59.922 { 00:17:59.922 "method": "bdev_malloc_create", 00:17:59.922 "params": { 00:17:59.922 "name": "malloc0", 00:17:59.922 "num_blocks": 8192, 00:17:59.922 "block_size": 4096, 00:17:59.922 "physical_block_size": 4096, 00:17:59.922 "uuid": "6805da3d-3686-4e2a-937b-6221e4fadfb7", 00:17:59.922 "optimal_io_boundary": 0 00:17:59.922 } 00:17:59.922 }, 00:17:59.922 { 00:17:59.922 "method": "bdev_wait_for_examine" 00:17:59.922 } 00:17:59.922 ] 00:17:59.922 }, 00:17:59.922 { 00:17:59.922 "subsystem": "nbd", 00:17:59.922 "config": [] 00:17:59.922 }, 00:17:59.922 { 00:17:59.922 "subsystem": "scheduler", 00:17:59.922 "config": [ 00:17:59.922 { 00:17:59.922 "method": "framework_set_scheduler", 00:17:59.922 "params": { 00:17:59.922 "name": "static" 00:17:59.922 } 00:17:59.922 } 00:17:59.922 ] 00:17:59.922 }, 00:17:59.922 { 00:17:59.922 "subsystem": "nvmf", 00:17:59.922 "config": [ 00:17:59.922 { 00:17:59.922 "method": "nvmf_set_config", 00:17:59.922 "params": { 00:17:59.922 "discovery_filter": "match_any", 00:17:59.922 "admin_cmd_passthru": { 00:17:59.922 "identify_ctrlr": false 00:17:59.922 } 00:17:59.922 } 00:17:59.923 }, 00:17:59.923 { 00:17:59.923 "method": "nvmf_set_max_subsystems", 00:17:59.923 
"params": { 00:17:59.923 "max_subsystems": 1024 00:17:59.923 } 00:17:59.923 }, 00:17:59.923 { 00:17:59.923 "method": "nvmf_set_crdt", 00:17:59.923 "params": { 00:17:59.923 "crdt1": 0, 00:17:59.923 "crdt2": 0, 00:17:59.923 "crdt3": 0 00:17:59.923 } 00:17:59.923 }, 00:17:59.923 { 00:17:59.923 "method": "nvmf_create_transport", 00:17:59.923 "params": { 00:17:59.923 "trtype": "TCP", 00:17:59.923 "max_queue_depth": 128, 00:17:59.923 "max_io_qpairs_per_ctrlr": 127, 00:17:59.923 "in_capsule_data_size": 4096, 00:17:59.923 "max_io_size": 131072, 00:17:59.923 "io_unit_size": 131072, 00:17:59.923 "max_aq_depth": 128, 00:17:59.923 "num_shared_buffers": 511, 00:17:59.923 "buf_cache_size": 4294967295, 00:17:59.923 "dif_insert_or_strip": false, 00:17:59.923 "zcopy": false, 00:17:59.923 "c2h_success": false, 00:17:59.923 "sock_priority": 0, 00:17:59.923 "abort_timeout_sec": 1, 00:17:59.923 "ack_timeout": 0, 00:17:59.923 "data_wr_pool_size": 0 00:17:59.923 } 00:17:59.923 }, 00:17:59.923 { 00:17:59.923 "method": "nvmf_create_subsystem", 00:17:59.923 "params": { 00:17:59.923 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:59.923 "allow_any_host": false, 00:17:59.923 "serial_number": "SPDK00000000000001", 00:17:59.923 "model_number": "SPDK bdev Controller", 00:17:59.923 "max_namespaces": 10, 00:17:59.923 "min_cntlid": 1, 00:17:59.923 "max_cntlid": 65519, 00:17:59.923 "ana_reporting": false 00:17:59.923 } 00:17:59.923 }, 00:17:59.923 { 00:17:59.923 "method": "nvmf_subsystem_add_host", 00:17:59.923 "params": { 00:17:59.923 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:59.923 "host": "nqn.2016-06.io.spdk:host1", 00:17:59.923 "psk": "/tmp/tmp.myCErdnqKM" 00:17:59.923 } 00:17:59.923 }, 00:17:59.923 { 00:17:59.923 "method": "nvmf_subsystem_add_ns", 00:17:59.923 "params": { 00:17:59.923 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:59.923 "namespace": { 00:17:59.923 "nsid": 1, 00:17:59.923 "bdev_name": "malloc0", 00:17:59.923 "nguid": "6805DA3D36864E2A937B6221E4FADFB7", 00:17:59.923 "uuid": 
"6805da3d-3686-4e2a-937b-6221e4fadfb7", 00:17:59.923 "no_auto_visible": false 00:17:59.923 } 00:17:59.923 } 00:17:59.923 }, 00:17:59.923 { 00:17:59.923 "method": "nvmf_subsystem_add_listener", 00:17:59.923 "params": { 00:17:59.923 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:59.923 "listen_address": { 00:17:59.923 "trtype": "TCP", 00:17:59.923 "adrfam": "IPv4", 00:17:59.923 "traddr": "10.0.0.2", 00:17:59.923 "trsvcid": "4420" 00:17:59.923 }, 00:17:59.923 "secure_channel": true 00:17:59.923 } 00:17:59.923 } 00:17:59.923 ] 00:17:59.923 } 00:17:59.923 ] 00:17:59.923 }' 00:17:59.923 22:41:43 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:00.182 22:41:43 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:00.182 "subsystems": [ 00:18:00.182 { 00:18:00.182 "subsystem": "keyring", 00:18:00.182 "config": [] 00:18:00.182 }, 00:18:00.182 { 00:18:00.182 "subsystem": "iobuf", 00:18:00.182 "config": [ 00:18:00.182 { 00:18:00.182 "method": "iobuf_set_options", 00:18:00.182 "params": { 00:18:00.182 "small_pool_count": 8192, 00:18:00.182 "large_pool_count": 1024, 00:18:00.182 "small_bufsize": 8192, 00:18:00.182 "large_bufsize": 135168 00:18:00.182 } 00:18:00.182 } 00:18:00.182 ] 00:18:00.182 }, 00:18:00.182 { 00:18:00.182 "subsystem": "sock", 00:18:00.182 "config": [ 00:18:00.182 { 00:18:00.182 "method": "sock_set_default_impl", 00:18:00.182 "params": { 00:18:00.182 "impl_name": "posix" 00:18:00.182 } 00:18:00.182 }, 00:18:00.182 { 00:18:00.182 "method": "sock_impl_set_options", 00:18:00.182 "params": { 00:18:00.182 "impl_name": "ssl", 00:18:00.182 "recv_buf_size": 4096, 00:18:00.182 "send_buf_size": 4096, 00:18:00.182 "enable_recv_pipe": true, 00:18:00.182 "enable_quickack": false, 00:18:00.182 "enable_placement_id": 0, 00:18:00.182 "enable_zerocopy_send_server": true, 00:18:00.182 "enable_zerocopy_send_client": false, 00:18:00.182 "zerocopy_threshold": 0, 
00:18:00.182 "tls_version": 0, 00:18:00.182 "enable_ktls": false 00:18:00.182 } 00:18:00.182 }, 00:18:00.182 { 00:18:00.182 "method": "sock_impl_set_options", 00:18:00.182 "params": { 00:18:00.182 "impl_name": "posix", 00:18:00.182 "recv_buf_size": 2097152, 00:18:00.182 "send_buf_size": 2097152, 00:18:00.182 "enable_recv_pipe": true, 00:18:00.182 "enable_quickack": false, 00:18:00.182 "enable_placement_id": 0, 00:18:00.182 "enable_zerocopy_send_server": true, 00:18:00.182 "enable_zerocopy_send_client": false, 00:18:00.182 "zerocopy_threshold": 0, 00:18:00.182 "tls_version": 0, 00:18:00.182 "enable_ktls": false 00:18:00.182 } 00:18:00.182 } 00:18:00.182 ] 00:18:00.182 }, 00:18:00.182 { 00:18:00.182 "subsystem": "vmd", 00:18:00.182 "config": [] 00:18:00.182 }, 00:18:00.182 { 00:18:00.182 "subsystem": "accel", 00:18:00.182 "config": [ 00:18:00.182 { 00:18:00.182 "method": "accel_set_options", 00:18:00.182 "params": { 00:18:00.182 "small_cache_size": 128, 00:18:00.182 "large_cache_size": 16, 00:18:00.182 "task_count": 2048, 00:18:00.182 "sequence_count": 2048, 00:18:00.182 "buf_count": 2048 00:18:00.182 } 00:18:00.182 } 00:18:00.182 ] 00:18:00.182 }, 00:18:00.182 { 00:18:00.182 "subsystem": "bdev", 00:18:00.182 "config": [ 00:18:00.182 { 00:18:00.182 "method": "bdev_set_options", 00:18:00.182 "params": { 00:18:00.182 "bdev_io_pool_size": 65535, 00:18:00.182 "bdev_io_cache_size": 256, 00:18:00.182 "bdev_auto_examine": true, 00:18:00.182 "iobuf_small_cache_size": 128, 00:18:00.182 "iobuf_large_cache_size": 16 00:18:00.182 } 00:18:00.182 }, 00:18:00.182 { 00:18:00.182 "method": "bdev_raid_set_options", 00:18:00.182 "params": { 00:18:00.182 "process_window_size_kb": 1024 00:18:00.182 } 00:18:00.182 }, 00:18:00.182 { 00:18:00.182 "method": "bdev_iscsi_set_options", 00:18:00.182 "params": { 00:18:00.182 "timeout_sec": 30 00:18:00.182 } 00:18:00.182 }, 00:18:00.182 { 00:18:00.182 "method": "bdev_nvme_set_options", 00:18:00.182 "params": { 00:18:00.182 "action_on_timeout": 
"none", 00:18:00.182 "timeout_us": 0, 00:18:00.182 "timeout_admin_us": 0, 00:18:00.182 "keep_alive_timeout_ms": 10000, 00:18:00.182 "arbitration_burst": 0, 00:18:00.182 "low_priority_weight": 0, 00:18:00.182 "medium_priority_weight": 0, 00:18:00.182 "high_priority_weight": 0, 00:18:00.182 "nvme_adminq_poll_period_us": 10000, 00:18:00.182 "nvme_ioq_poll_period_us": 0, 00:18:00.182 "io_queue_requests": 512, 00:18:00.182 "delay_cmd_submit": true, 00:18:00.182 "transport_retry_count": 4, 00:18:00.182 "bdev_retry_count": 3, 00:18:00.182 "transport_ack_timeout": 0, 00:18:00.182 "ctrlr_loss_timeout_sec": 0, 00:18:00.182 "reconnect_delay_sec": 0, 00:18:00.182 "fast_io_fail_timeout_sec": 0, 00:18:00.182 "disable_auto_failback": false, 00:18:00.182 "generate_uuids": false, 00:18:00.182 "transport_tos": 0, 00:18:00.182 "nvme_error_stat": false, 00:18:00.182 "rdma_srq_size": 0, 00:18:00.182 "io_path_stat": false, 00:18:00.182 "allow_accel_sequence": false, 00:18:00.182 "rdma_max_cq_size": 0, 00:18:00.182 "rdma_cm_event_timeout_ms": 0, 00:18:00.182 "dhchap_digests": [ 00:18:00.182 "sha256", 00:18:00.182 "sha384", 00:18:00.182 "sha512" 00:18:00.182 ], 00:18:00.182 "dhchap_dhgroups": [ 00:18:00.182 "null", 00:18:00.182 "ffdhe2048", 00:18:00.182 "ffdhe3072", 00:18:00.182 "ffdhe4096", 00:18:00.182 "ffdhe6144", 00:18:00.182 "ffdhe8192" 00:18:00.182 ] 00:18:00.182 } 00:18:00.182 }, 00:18:00.182 { 00:18:00.182 "method": "bdev_nvme_attach_controller", 00:18:00.182 "params": { 00:18:00.182 "name": "TLSTEST", 00:18:00.182 "trtype": "TCP", 00:18:00.182 "adrfam": "IPv4", 00:18:00.182 "traddr": "10.0.0.2", 00:18:00.182 "trsvcid": "4420", 00:18:00.182 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.182 "prchk_reftag": false, 00:18:00.182 "prchk_guard": false, 00:18:00.182 "ctrlr_loss_timeout_sec": 0, 00:18:00.182 "reconnect_delay_sec": 0, 00:18:00.182 "fast_io_fail_timeout_sec": 0, 00:18:00.182 "psk": "/tmp/tmp.myCErdnqKM", 00:18:00.182 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:00.182 
"hdgst": false, 00:18:00.182 "ddgst": false 00:18:00.182 } 00:18:00.182 }, 00:18:00.182 { 00:18:00.182 "method": "bdev_nvme_set_hotplug", 00:18:00.182 "params": { 00:18:00.182 "period_us": 100000, 00:18:00.182 "enable": false 00:18:00.182 } 00:18:00.182 }, 00:18:00.182 { 00:18:00.182 "method": "bdev_wait_for_examine" 00:18:00.182 } 00:18:00.182 ] 00:18:00.182 }, 00:18:00.182 { 00:18:00.182 "subsystem": "nbd", 00:18:00.182 "config": [] 00:18:00.182 } 00:18:00.182 ] 00:18:00.182 }' 00:18:00.182 22:41:43 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1281933 00:18:00.182 22:41:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1281933 ']' 00:18:00.182 22:41:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1281933 00:18:00.182 22:41:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:18:00.182 22:41:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:18:00.182 22:41:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1281933 00:18:00.182 22:41:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:18:00.182 22:41:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:18:00.182 22:41:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1281933' 00:18:00.182 killing process with pid 1281933 00:18:00.182 22:41:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1281933 00:18:00.182 Received shutdown signal, test time was about 10.000000 seconds 00:18:00.182 00:18:00.182 Latency(us) 00:18:00.182 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.182 =================================================================================================================== 00:18:00.182 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:00.182 [2024-07-15 22:41:43.657318] app.c:1024:log_deprecation_hits: 
*WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:00.182 22:41:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1281933 00:18:00.440 22:41:43 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1281738 00:18:00.440 22:41:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1281738 ']' 00:18:00.440 22:41:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1281738 00:18:00.440 22:41:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:18:00.440 22:41:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:18:00.440 22:41:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1281738 00:18:00.440 22:41:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:18:00.440 22:41:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:18:00.440 22:41:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1281738' 00:18:00.440 killing process with pid 1281738 00:18:00.440 22:41:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1281738 00:18:00.440 [2024-07-15 22:41:43.937282] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:00.440 22:41:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1281738 00:18:01.006 22:41:44 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:01.006 22:41:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:01.006 22:41:44 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:18:01.006 "subsystems": [ 00:18:01.006 { 00:18:01.006 "subsystem": "keyring", 00:18:01.006 "config": [] 00:18:01.006 }, 00:18:01.006 { 00:18:01.006 "subsystem": "iobuf", 00:18:01.006 "config": [ 00:18:01.006 { 00:18:01.006 "method": 
"iobuf_set_options", 00:18:01.006 "params": { 00:18:01.006 "small_pool_count": 8192, 00:18:01.006 "large_pool_count": 1024, 00:18:01.006 "small_bufsize": 8192, 00:18:01.006 "large_bufsize": 135168 00:18:01.006 } 00:18:01.006 } 00:18:01.006 ] 00:18:01.006 }, 00:18:01.006 { 00:18:01.006 "subsystem": "sock", 00:18:01.006 "config": [ 00:18:01.007 { 00:18:01.007 "method": "sock_set_default_impl", 00:18:01.007 "params": { 00:18:01.007 "impl_name": "posix" 00:18:01.007 } 00:18:01.007 }, 00:18:01.007 { 00:18:01.007 "method": "sock_impl_set_options", 00:18:01.007 "params": { 00:18:01.007 "impl_name": "ssl", 00:18:01.007 "recv_buf_size": 4096, 00:18:01.007 "send_buf_size": 4096, 00:18:01.007 "enable_recv_pipe": true, 00:18:01.007 "enable_quickack": false, 00:18:01.007 "enable_placement_id": 0, 00:18:01.007 "enable_zerocopy_send_server": true, 00:18:01.007 "enable_zerocopy_send_client": false, 00:18:01.007 "zerocopy_threshold": 0, 00:18:01.007 "tls_version": 0, 00:18:01.007 "enable_ktls": false 00:18:01.007 } 00:18:01.007 }, 00:18:01.007 { 00:18:01.007 "method": "sock_impl_set_options", 00:18:01.007 "params": { 00:18:01.007 "impl_name": "posix", 00:18:01.007 "recv_buf_size": 2097152, 00:18:01.007 "send_buf_size": 2097152, 00:18:01.007 "enable_recv_pipe": true, 00:18:01.007 "enable_quickack": false, 00:18:01.007 "enable_placement_id": 0, 00:18:01.007 "enable_zerocopy_send_server": true, 00:18:01.007 "enable_zerocopy_send_client": false, 00:18:01.007 "zerocopy_threshold": 0, 00:18:01.007 "tls_version": 0, 00:18:01.007 "enable_ktls": false 00:18:01.007 } 00:18:01.007 } 00:18:01.007 ] 00:18:01.007 }, 00:18:01.007 { 00:18:01.007 "subsystem": "vmd", 00:18:01.007 "config": [] 00:18:01.007 }, 00:18:01.007 { 00:18:01.007 "subsystem": "accel", 00:18:01.007 "config": [ 00:18:01.007 { 00:18:01.007 "method": "accel_set_options", 00:18:01.007 "params": { 00:18:01.007 "small_cache_size": 128, 00:18:01.007 "large_cache_size": 16, 00:18:01.007 "task_count": 2048, 00:18:01.007 
"sequence_count": 2048, 00:18:01.007 "buf_count": 2048 00:18:01.007 } 00:18:01.007 } 00:18:01.007 ] 00:18:01.007 }, 00:18:01.007 { 00:18:01.007 "subsystem": "bdev", 00:18:01.007 "config": [ 00:18:01.007 { 00:18:01.007 "method": "bdev_set_options", 00:18:01.007 "params": { 00:18:01.007 "bdev_io_pool_size": 65535, 00:18:01.007 "bdev_io_cache_size": 256, 00:18:01.007 "bdev_auto_examine": true, 00:18:01.007 "iobuf_small_cache_size": 128, 00:18:01.007 "iobuf_large_cache_size": 16 00:18:01.007 } 00:18:01.007 }, 00:18:01.007 { 00:18:01.007 "method": "bdev_raid_set_options", 00:18:01.007 "params": { 00:18:01.007 "process_window_size_kb": 1024 00:18:01.007 } 00:18:01.007 }, 00:18:01.007 { 00:18:01.007 "method": "bdev_iscsi_set_options", 00:18:01.007 "params": { 00:18:01.007 "timeout_sec": 30 00:18:01.007 } 00:18:01.007 }, 00:18:01.007 { 00:18:01.007 "method": "bdev_nvme_set_options", 00:18:01.007 "params": { 00:18:01.007 "action_on_timeout": "none", 00:18:01.007 "timeout_us": 0, 00:18:01.007 "timeout_admin_us": 0, 00:18:01.007 "keep_alive_timeout_ms": 10000, 00:18:01.007 "arbitration_burst": 0, 00:18:01.007 "low_priority_weight": 0, 00:18:01.007 "medium_priority_weight": 0, 00:18:01.007 "high_priority_weight": 0, 00:18:01.007 "nvme_adminq_poll_period_us": 10000, 00:18:01.007 "nvme_ioq_poll_period_us": 0, 00:18:01.007 "io_queue_requests": 0, 00:18:01.007 "delay_cmd_submit": true, 00:18:01.007 "transport_retry_count": 4, 00:18:01.007 "bdev_retry_count": 3, 00:18:01.007 "transport_ack_timeout": 0, 00:18:01.007 "ctrlr_loss_timeout_sec": 0, 00:18:01.007 "reconnect_delay_sec": 0, 00:18:01.007 "fast_io_fail_timeout_sec": 0, 00:18:01.007 "disable_auto_failback": false, 00:18:01.007 "generate_uuids": false, 00:18:01.007 "transport_tos": 0, 00:18:01.007 "nvme_error_stat": false, 00:18:01.007 "rdma_srq_size": 0, 00:18:01.007 "io_path_stat": false, 00:18:01.007 "allow_accel_sequence": false, 00:18:01.007 "rdma_max_cq_size": 0, 00:18:01.007 "rdma_cm_event_timeout_ms": 0, 00:18:01.007 
"dhchap_digests": [ 00:18:01.007 "sha256", 00:18:01.007 "sha384", 00:18:01.007 "sha512" 00:18:01.007 ], 00:18:01.007 "dhchap_dhgroups": [ 00:18:01.007 "null", 00:18:01.007 "ffdhe2048", 00:18:01.007 "ffdhe3072", 00:18:01.007 "ffdhe4096", 00:18:01.007 "ffdhe6144", 00:18:01.007 "ffdhe8192" 00:18:01.007 ] 00:18:01.007 } 00:18:01.007 }, 00:18:01.007 { 00:18:01.007 "method": "bdev_nvme_set_hotplug", 00:18:01.007 "params": { 00:18:01.007 "period_us": 100000, 00:18:01.007 "enable": false 00:18:01.007 } 00:18:01.007 }, 00:18:01.007 { 00:18:01.007 "method": "bdev_malloc_create", 00:18:01.007 "params": { 00:18:01.007 "name": "malloc0", 00:18:01.007 "num_blocks": 8192, 00:18:01.007 "block_size": 4096, 00:18:01.007 "physical_block_size": 4096, 00:18:01.007 "uuid": "6805da3d-3686-4e2a-937b-6221e4fadfb7", 00:18:01.007 "optimal_io_boundary": 0 00:18:01.007 } 00:18:01.007 }, 00:18:01.007 { 00:18:01.007 "method": "bdev_wait_for_examine" 00:18:01.007 } 00:18:01.007 ] 00:18:01.007 }, 00:18:01.007 { 00:18:01.007 "subsystem": "nbd", 00:18:01.007 "config": [] 00:18:01.007 }, 00:18:01.007 { 00:18:01.007 "subsystem": "scheduler", 00:18:01.007 "config": [ 00:18:01.007 { 00:18:01.007 "method": "framework_set_scheduler", 00:18:01.007 "params": { 00:18:01.007 "name": "static" 00:18:01.007 } 00:18:01.007 } 00:18:01.007 ] 00:18:01.007 }, 00:18:01.007 { 00:18:01.007 "subsystem": "nvmf", 00:18:01.007 "config": [ 00:18:01.007 { 00:18:01.007 "method": "nvmf_set_config", 00:18:01.007 "params": { 00:18:01.007 "discovery_filter": "match_any", 00:18:01.007 "admin_cmd_passthru": { 00:18:01.007 "identify_ctrlr": false 00:18:01.007 } 00:18:01.007 } 00:18:01.007 }, 00:18:01.007 { 00:18:01.007 "method": "nvmf_set_max_subsystems", 00:18:01.007 "params": { 00:18:01.007 "max_subsystems": 1024 00:18:01.007 } 00:18:01.007 }, 00:18:01.007 { 00:18:01.007 "method": "nvmf_set_crdt", 00:18:01.007 "params": { 00:18:01.007 "crdt1": 0, 00:18:01.007 "crdt2": 0, 00:18:01.007 "crdt3": 0 00:18:01.007 } 00:18:01.007 }, 
00:18:01.007 { 00:18:01.007 "method": "nvmf_create_transport", 00:18:01.007 "params": { 00:18:01.007 "trtype": "TCP", 00:18:01.007 "max_queue_depth": 128, 00:18:01.007 "max_io_qpairs_per_ctrlr": 127, 00:18:01.007 "in_capsule_data_size": 4096, 00:18:01.007 "max_io_size": 131072, 00:18:01.007 "io_unit_size": 131072, 00:18:01.007 "max_aq_depth": 128, 00:18:01.007 "num_shared_buffers": 511, 00:18:01.007 "buf_cache_size": 4294967295, 00:18:01.007 "dif_insert_or_strip": false, 00:18:01.007 "zcopy": false, 00:18:01.007 "c2h_success": false, 00:18:01.007 "sock_priority": 0, 00:18:01.007 "abort_timeout_sec": 1, 00:18:01.007 "ack_timeout": 0, 00:18:01.007 "data_wr_pool_size": 0 00:18:01.007 } 00:18:01.007 }, 00:18:01.007 { 00:18:01.007 "method": "nvmf_create_subsystem", 00:18:01.007 "params": { 00:18:01.007 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:01.007 "allow_any_host": false, 00:18:01.007 "serial_number": "SPDK00000000000001", 00:18:01.007 "model_number": "SPDK bdev Controller", 00:18:01.007 "max_namespaces": 10, 00:18:01.007 "min_cntlid": 1, 00:18:01.007 "max_cntlid": 65519, 00:18:01.007 "ana_reporting": false 00:18:01.007 } 00:18:01.007 }, 00:18:01.007 { 00:18:01.007 "method": "nvmf_subsystem_add_host", 00:18:01.007 "params": { 00:18:01.007 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:01.007 "host": "nqn.2016-06.io.spdk:host1", 00:18:01.007 "psk": "/tmp/tmp.myCErdnqKM" 00:18:01.007 } 00:18:01.007 }, 00:18:01.007 { 00:18:01.007 "method": "nvmf_subsystem_add_ns", 00:18:01.007 "params": { 00:18:01.007 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:01.007 "namespace": { 00:18:01.007 "nsid": 1, 00:18:01.007 "bdev_name": "malloc0", 00:18:01.007 "nguid": "6805DA3D36864E2A937B6221E4FADFB7", 00:18:01.007 "uuid": "6805da3d-3686-4e2a-937b-6221e4fadfb7", 00:18:01.007 "no_auto_visible": false 00:18:01.007 } 00:18:01.007 } 00:18:01.007 }, 00:18:01.007 { 00:18:01.007 "method": "nvmf_subsystem_add_listener", 00:18:01.007 "params": { 00:18:01.007 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:18:01.008 "listen_address": { 00:18:01.008 "trtype": "TCP", 00:18:01.008 "adrfam": "IPv4", 00:18:01.008 "traddr": "10.0.0.2", 00:18:01.008 "trsvcid": "4420" 00:18:01.008 }, 00:18:01.008 "secure_channel": true 00:18:01.008 } 00:18:01.008 } 00:18:01.008 ] 00:18:01.008 } 00:18:01.008 ] 00:18:01.008 }' 00:18:01.008 22:41:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:01.008 22:41:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.008 22:41:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1282184 00:18:01.008 22:41:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:01.008 22:41:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1282184 00:18:01.008 22:41:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1282184 ']' 00:18:01.008 22:41:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.008 22:41:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:01.008 22:41:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.008 22:41:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:01.008 22:41:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.008 [2024-07-15 22:41:44.287543] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:18:01.008 [2024-07-15 22:41:44.287628] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.008 [2024-07-15 22:41:44.355774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.008 [2024-07-15 22:41:44.468328] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:01.008 [2024-07-15 22:41:44.468392] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:01.008 [2024-07-15 22:41:44.468408] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:01.008 [2024-07-15 22:41:44.468421] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:01.008 [2024-07-15 22:41:44.468433] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:01.008 [2024-07-15 22:41:44.468525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.267 [2024-07-15 22:41:44.713474] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.267 [2024-07-15 22:41:44.729436] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:01.267 [2024-07-15 22:41:44.745481] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:01.267 [2024-07-15 22:41:44.760119] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.833 22:41:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:01.833 22:41:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:18:01.833 22:41:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:01.833 22:41:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:01.833 22:41:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.833 22:41:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.833 22:41:45 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1282337 00:18:01.833 22:41:45 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1282337 /var/tmp/bdevperf.sock 00:18:01.833 22:41:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1282337 ']' 00:18:01.833 22:41:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:01.833 22:41:45 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:01.833 22:41:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:01.833 22:41:45 nvmf_tcp.nvmf_tls 
-- target/tls.sh@204 -- # echo '{ 00:18:01.833 "subsystems": [ 00:18:01.833 { 00:18:01.833 "subsystem": "keyring", 00:18:01.833 "config": [] 00:18:01.833 }, 00:18:01.833 { 00:18:01.833 "subsystem": "iobuf", 00:18:01.833 "config": [ 00:18:01.833 { 00:18:01.833 "method": "iobuf_set_options", 00:18:01.833 "params": { 00:18:01.833 "small_pool_count": 8192, 00:18:01.833 "large_pool_count": 1024, 00:18:01.833 "small_bufsize": 8192, 00:18:01.833 "large_bufsize": 135168 00:18:01.833 } 00:18:01.833 } 00:18:01.833 ] 00:18:01.833 }, 00:18:01.833 { 00:18:01.833 "subsystem": "sock", 00:18:01.833 "config": [ 00:18:01.833 { 00:18:01.833 "method": "sock_set_default_impl", 00:18:01.833 "params": { 00:18:01.833 "impl_name": "posix" 00:18:01.833 } 00:18:01.833 }, 00:18:01.833 { 00:18:01.833 "method": "sock_impl_set_options", 00:18:01.833 "params": { 00:18:01.833 "impl_name": "ssl", 00:18:01.833 "recv_buf_size": 4096, 00:18:01.833 "send_buf_size": 4096, 00:18:01.833 "enable_recv_pipe": true, 00:18:01.833 "enable_quickack": false, 00:18:01.833 "enable_placement_id": 0, 00:18:01.833 "enable_zerocopy_send_server": true, 00:18:01.833 "enable_zerocopy_send_client": false, 00:18:01.833 "zerocopy_threshold": 0, 00:18:01.833 "tls_version": 0, 00:18:01.833 "enable_ktls": false 00:18:01.833 } 00:18:01.833 }, 00:18:01.833 { 00:18:01.833 "method": "sock_impl_set_options", 00:18:01.833 "params": { 00:18:01.833 "impl_name": "posix", 00:18:01.833 "recv_buf_size": 2097152, 00:18:01.833 "send_buf_size": 2097152, 00:18:01.833 "enable_recv_pipe": true, 00:18:01.833 "enable_quickack": false, 00:18:01.833 "enable_placement_id": 0, 00:18:01.833 "enable_zerocopy_send_server": true, 00:18:01.833 "enable_zerocopy_send_client": false, 00:18:01.833 "zerocopy_threshold": 0, 00:18:01.833 "tls_version": 0, 00:18:01.833 "enable_ktls": false 00:18:01.833 } 00:18:01.833 } 00:18:01.833 ] 00:18:01.833 }, 00:18:01.833 { 00:18:01.833 "subsystem": "vmd", 00:18:01.833 "config": [] 00:18:01.833 }, 00:18:01.833 { 
00:18:01.833 "subsystem": "accel", 00:18:01.833 "config": [ 00:18:01.833 { 00:18:01.833 "method": "accel_set_options", 00:18:01.833 "params": { 00:18:01.833 "small_cache_size": 128, 00:18:01.833 "large_cache_size": 16, 00:18:01.833 "task_count": 2048, 00:18:01.833 "sequence_count": 2048, 00:18:01.833 "buf_count": 2048 00:18:01.833 } 00:18:01.833 } 00:18:01.833 ] 00:18:01.833 }, 00:18:01.833 { 00:18:01.833 "subsystem": "bdev", 00:18:01.833 "config": [ 00:18:01.833 { 00:18:01.833 "method": "bdev_set_options", 00:18:01.833 "params": { 00:18:01.833 "bdev_io_pool_size": 65535, 00:18:01.833 "bdev_io_cache_size": 256, 00:18:01.833 "bdev_auto_examine": true, 00:18:01.833 "iobuf_small_cache_size": 128, 00:18:01.833 "iobuf_large_cache_size": 16 00:18:01.833 } 00:18:01.833 }, 00:18:01.833 { 00:18:01.833 "method": "bdev_raid_set_options", 00:18:01.833 "params": { 00:18:01.833 "process_window_size_kb": 1024 00:18:01.833 } 00:18:01.833 }, 00:18:01.833 { 00:18:01.833 "method": "bdev_iscsi_set_options", 00:18:01.833 "params": { 00:18:01.833 "timeout_sec": 30 00:18:01.833 } 00:18:01.833 }, 00:18:01.833 { 00:18:01.833 "method": "bdev_nvme_set_options", 00:18:01.833 "params": { 00:18:01.833 "action_on_timeout": "none", 00:18:01.833 "timeout_us": 0, 00:18:01.833 "timeout_admin_us": 0, 00:18:01.833 "keep_alive_timeout_ms": 10000, 00:18:01.833 "arbitration_burst": 0, 00:18:01.833 "low_priority_weight": 0, 00:18:01.833 "medium_priority_weight": 0, 00:18:01.833 "high_priority_weight": 0, 00:18:01.833 "nvme_adminq_poll_period_us": 10000, 00:18:01.833 "nvme_ioq_poll_period_us": 0, 00:18:01.833 "io_queue_requests": 512, 00:18:01.833 "delay_cmd_submit": true, 00:18:01.833 "transport_retry_count": 4, 00:18:01.833 "bdev_retry_count": 3, 00:18:01.833 "transport_ack_timeout": 0, 00:18:01.833 "ctrlr_loss_timeout_sec": 0, 00:18:01.833 "reconnect_delay_sec": 0, 00:18:01.833 "fast_io_fail_timeout_sec": 0, 00:18:01.833 "disable_auto_failback": false, 00:18:01.833 "generate_uuids": false, 00:18:01.833 
"transport_tos": 0, 00:18:01.833 "nvme_error_stat": false, 00:18:01.834 "rdma_srq_size": 0, 00:18:01.834 "io_path_stat": false, 00:18:01.834 "allow_accel_sequence": false, 00:18:01.834 "rdma_max_cq_size": 0, 00:18:01.834 "rdma_cm_event_timeout_ms": 0, 00:18:01.834 "dhchap_digests": [ 00:18:01.834 "sha256", 00:18:01.834 "sha384", 00:18:01.834 "sha512" 00:18:01.834 ], 00:18:01.834 "dhchap_dhgroups": [ 00:18:01.834 "null", 00:18:01.834 "ffdhe2048", 00:18:01.834 "ffdhe3072", 00:18:01.834 "ffdhe4096", 00:18:01.834 "ffdhe6144", 00:18:01.834 "ffdhe8192" 00:18:01.834 ] 00:18:01.834 } 00:18:01.834 }, 00:18:01.834 { 00:18:01.834 "method": "bdev_nvme_attach_controller", 00:18:01.834 "params": { 00:18:01.834 "name": "TLSTEST", 00:18:01.834 "trtype": "TCP", 00:18:01.834 "adrfam": "IPv4", 00:18:01.834 "traddr": "10.0.0.2", 00:18:01.834 "trsvcid": "4420", 00:18:01.834 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:01.834 "prchk_reftag": false, 00:18:01.834 "prchk_guard": false, 00:18:01.834 "ctrlr_loss_timeout_sec": 0, 00:18:01.834 "reconnect_delay_sec": 0, 00:18:01.834 "fast_io_fail_timeout_sec": 0, 00:18:01.834 "psk": "/tmp/tmp.myCErdnqKM", 00:18:01.834 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:01.834 "hdgst": false, 00:18:01.834 "ddgst": false 00:18:01.834 } 00:18:01.834 }, 00:18:01.834 { 00:18:01.834 "method": "bdev_nvme_set_hotplug", 00:18:01.834 "params": { 00:18:01.834 "period_us": 100000, 00:18:01.834 "enable": false 00:18:01.834 } 00:18:01.834 }, 00:18:01.834 { 00:18:01.834 "method": "bdev_wait_for_examine" 00:18:01.834 } 00:18:01.834 ] 00:18:01.834 }, 00:18:01.834 { 00:18:01.834 "subsystem": "nbd", 00:18:01.834 "config": [] 00:18:01.834 } 00:18:01.834 ] 00:18:01.834 }' 22:41:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:01.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:01.834 22:41:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:01.834 22:41:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.834 [2024-07-15 22:41:45.296758] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:18:01.834 [2024-07-15 22:41:45.296833] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1282337 ] 00:18:02.092 [2024-07-15 22:41:45.354987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.092 [2024-07-15 22:41:45.461611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:02.350 [2024-07-15 22:41:45.632049] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:02.350 [2024-07-15 22:41:45.632212] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:02.924 22:41:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:02.924 22:41:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:18:02.925 22:41:46 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:02.925 Running I/O for 10 seconds... 
00:18:15.124 00:18:15.124 Latency(us) 00:18:15.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.125 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:15.125 Verification LBA range: start 0x0 length 0x2000 00:18:15.125 TLSTESTn1 : 10.07 1698.56 6.63 0.00 0.00 75126.16 6602.15 107187.77 00:18:15.125 =================================================================================================================== 00:18:15.125 Total : 1698.56 6.63 0.00 0.00 75126.16 6602.15 107187.77 00:18:15.125 0 00:18:15.125 22:41:56 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:15.125 22:41:56 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1282337 00:18:15.125 22:41:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1282337 ']' 00:18:15.125 22:41:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1282337 00:18:15.125 22:41:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:18:15.125 22:41:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:18:15.125 22:41:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1282337 00:18:15.125 22:41:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:18:15.125 22:41:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:18:15.125 22:41:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1282337' 00:18:15.125 killing process with pid 1282337 00:18:15.125 22:41:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1282337 00:18:15.125 Received shutdown signal, test time was about 10.000000 seconds 00:18:15.125 00:18:15.125 Latency(us) 00:18:15.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.125 
=================================================================================================================== 00:18:15.125 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:15.125 [2024-07-15 22:41:56.505690] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:15.125 22:41:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1282337 00:18:15.125 22:41:56 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1282184 00:18:15.125 22:41:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1282184 ']' 00:18:15.125 22:41:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1282184 00:18:15.125 22:41:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:18:15.125 22:41:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:18:15.125 22:41:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1282184 00:18:15.125 22:41:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:18:15.125 22:41:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:18:15.125 22:41:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1282184' 00:18:15.125 killing process with pid 1282184 00:18:15.125 22:41:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1282184 00:18:15.125 [2024-07-15 22:41:56.799173] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:15.125 22:41:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1282184 00:18:15.125 22:41:57 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:18:15.125 22:41:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:15.125 22:41:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@716 -- # 
xtrace_disable 00:18:15.125 22:41:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.125 22:41:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1283669 00:18:15.125 22:41:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:15.125 22:41:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1283669 00:18:15.125 22:41:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1283669 ']' 00:18:15.125 22:41:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.125 22:41:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:15.125 22:41:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.125 22:41:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:15.125 22:41:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.125 [2024-07-15 22:41:57.136629] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:18:15.125 [2024-07-15 22:41:57.136712] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.125 [2024-07-15 22:41:57.201200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.125 [2024-07-15 22:41:57.308524] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.125 [2024-07-15 22:41:57.308579] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:15.125 [2024-07-15 22:41:57.308607] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.125 [2024-07-15 22:41:57.308618] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.125 [2024-07-15 22:41:57.308628] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:15.125 [2024-07-15 22:41:57.308660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.125 22:41:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:15.125 22:41:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:18:15.125 22:41:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:15.125 22:41:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:15.125 22:41:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.125 22:41:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.125 22:41:57 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.myCErdnqKM 00:18:15.125 22:41:57 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.myCErdnqKM 00:18:15.125 22:41:57 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:15.125 [2024-07-15 22:41:57.723597] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:15.125 22:41:57 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:15.125 22:41:57 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 
00:18:15.125 [2024-07-15 22:41:58.216897] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:15.125 [2024-07-15 22:41:58.217115] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:15.125 22:41:58 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:15.125 malloc0 00:18:15.125 22:41:58 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:15.382 22:41:58 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.myCErdnqKM 00:18:15.641 [2024-07-15 22:41:59.011043] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:15.641 22:41:59 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1283951 00:18:15.641 22:41:59 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:15.641 22:41:59 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:15.641 22:41:59 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1283951 /var/tmp/bdevperf.sock 00:18:15.641 22:41:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1283951 ']' 00:18:15.641 22:41:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:15.641 22:41:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:15.641 22:41:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:18:15.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:15.641 22:41:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:15.641 22:41:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.641 [2024-07-15 22:41:59.070851] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:18:15.641 [2024-07-15 22:41:59.070940] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1283951 ] 00:18:15.641 [2024-07-15 22:41:59.128378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.935 [2024-07-15 22:41:59.239804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.935 22:41:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:15.935 22:41:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:18:15.935 22:41:59 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.myCErdnqKM 00:18:16.193 22:41:59 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:16.451 [2024-07-15 22:41:59.841981] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:16.451 nvme0n1 00:18:16.451 22:41:59 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:16.709 Running 
I/O for 1 seconds... 00:18:17.643 00:18:17.643 Latency(us) 00:18:17.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.643 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:17.643 Verification LBA range: start 0x0 length 0x2000 00:18:17.643 nvme0n1 : 1.07 1617.46 6.32 0.00 0.00 77114.73 6456.51 107187.77 00:18:17.643 =================================================================================================================== 00:18:17.643 Total : 1617.46 6.32 0.00 0.00 77114.73 6456.51 107187.77 00:18:17.643 0 00:18:17.643 22:42:01 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1283951 00:18:17.643 22:42:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1283951 ']' 00:18:17.643 22:42:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1283951 00:18:17.643 22:42:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:18:17.643 22:42:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:18:17.643 22:42:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1283951 00:18:17.901 22:42:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:18:17.901 22:42:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:18:17.901 22:42:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1283951' 00:18:17.901 killing process with pid 1283951 00:18:17.901 22:42:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1283951 00:18:17.901 Received shutdown signal, test time was about 1.000000 seconds 00:18:17.901 00:18:17.901 Latency(us) 00:18:17.901 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.901 =================================================================================================================== 00:18:17.901 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
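The trace above drives the TLS test through `scripts/rpc.py`: `keyring_file_add_key` registers the PSK file under the name `key0`, and `bdev_nvme_attach_controller ... --psk key0` attaches the NVMe/TCP controller using that key. As a sketch of what those two invocations put on the wire, the snippet below builds the equivalent JSON-RPC 2.0 request objects. The exact field layout is an assumption based on SPDK's JSON-RPC conventions, and `/tmp/psk.txt` is a placeholder for the temporary key file (`/tmp/tmp.myCErdnqKM` in this run):

```python
import json

def rpc_request(method, params, request_id=1):
    """Build a JSON-RPC 2.0 request of the shape rpc.py sends over the UNIX socket."""
    return {"jsonrpc": "2.0", "id": request_id, "method": method, "params": params}

# Step 1: register the PSK file under a keyring name ("key0").
add_key = rpc_request("keyring_file_add_key",
                      {"name": "key0", "path": "/tmp/psk.txt"})

# Step 2: attach the controller, referencing the registered key by name.
attach = rpc_request("bdev_nvme_attach_controller",
                     {"name": "nvme0", "trtype": "TCP", "adrfam": "IPv4",
                      "traddr": "10.0.0.2", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1",
                      "psk": "key0"},
                     request_id=2)

wire = json.dumps(add_key) + "\n" + json.dumps(attach)
```

Note that the controller side never sees the key path directly; it resolves `"psk": "key0"` through the keyring, which is why the key must be added to the bdevperf instance's own RPC socket (`-s /var/tmp/bdevperf.sock`) before the attach.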
00:18:17.901 22:42:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1283951 00:18:18.158 22:42:01 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1283669 00:18:18.158 22:42:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1283669 ']' 00:18:18.158 22:42:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1283669 00:18:18.158 22:42:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:18:18.158 22:42:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:18:18.158 22:42:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1283669 00:18:18.158 22:42:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:18:18.158 22:42:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:18:18.158 22:42:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1283669' 00:18:18.158 killing process with pid 1283669 00:18:18.158 22:42:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1283669 00:18:18.158 [2024-07-15 22:42:01.433281] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:18.158 22:42:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1283669 00:18:18.414 22:42:01 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:18:18.414 22:42:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:18.414 22:42:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:18.414 22:42:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.414 22:42:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1284393 00:18:18.414 22:42:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
00:18:18.414 22:42:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1284393 00:18:18.414 22:42:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1284393 ']' 00:18:18.414 22:42:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.414 22:42:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:18.414 22:42:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.414 22:42:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:18.414 22:42:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.414 [2024-07-15 22:42:01.771579] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:18:18.414 [2024-07-15 22:42:01.771654] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:18.415 [2024-07-15 22:42:01.839278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.671 [2024-07-15 22:42:01.958408] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:18.671 [2024-07-15 22:42:01.958476] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:18.671 [2024-07-15 22:42:01.958506] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:18.671 [2024-07-15 22:42:01.958518] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:18:18.671 [2024-07-15 22:42:01.958528] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:18.671 [2024-07-15 22:42:01.958554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.671 22:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:18.671 22:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:18:18.671 22:42:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:18.671 22:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:18.671 22:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.671 22:42:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.671 22:42:02 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:18:18.671 22:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:18.671 22:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.671 [2024-07-15 22:42:02.106765] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:18.671 malloc0 00:18:18.671 [2024-07-15 22:42:02.139280] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:18.671 [2024-07-15 22:42:02.139541] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:18.671 22:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:18.671 22:42:02 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=1284486 00:18:18.671 22:42:02 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 1284486 /var/tmp/bdevperf.sock 00:18:18.671 22:42:02 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:18.671 22:42:02 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1284486 ']' 00:18:18.671 22:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:18.671 22:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:18.671 22:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:18.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:18.671 22:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:18.672 22:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.929 [2024-07-15 22:42:02.211425] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:18:18.929 [2024-07-15 22:42:02.211498] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1284486 ] 00:18:18.929 [2024-07-15 22:42:02.270104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.929 [2024-07-15 22:42:02.378615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.186 22:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:19.186 22:42:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:18:19.186 22:42:02 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.myCErdnqKM 00:18:19.442 22:42:02 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n 
nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:19.699 [2024-07-15 22:42:02.985848] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:19.699 nvme0n1 00:18:19.699 22:42:03 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:19.699 Running I/O for 1 seconds... 00:18:21.066 00:18:21.066 Latency(us) 00:18:21.066 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.066 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:21.066 Verification LBA range: start 0x0 length 0x2000 00:18:21.066 nvme0n1 : 1.06 1784.75 6.97 0.00 0.00 70070.21 6553.60 104857.60 00:18:21.066 =================================================================================================================== 00:18:21.066 Total : 1784.75 6.97 0.00 0.00 70070.21 6553.60 104857.60 00:18:21.066 0 00:18:21.066 22:42:04 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:18:21.066 22:42:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:21.066 22:42:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:21.066 22:42:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:21.066 22:42:04 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:18:21.066 "subsystems": [ 00:18:21.066 { 00:18:21.066 "subsystem": "keyring", 00:18:21.066 "config": [ 00:18:21.066 { 00:18:21.066 "method": "keyring_file_add_key", 00:18:21.066 "params": { 00:18:21.066 "name": "key0", 00:18:21.066 "path": "/tmp/tmp.myCErdnqKM" 00:18:21.066 } 00:18:21.066 } 00:18:21.066 ] 00:18:21.066 }, 00:18:21.066 { 00:18:21.066 "subsystem": "iobuf", 00:18:21.066 "config": [ 00:18:21.066 { 00:18:21.066 "method": "iobuf_set_options", 00:18:21.066 "params": { 00:18:21.066 "small_pool_count": 8192, 00:18:21.066 
"large_pool_count": 1024, 00:18:21.066 "small_bufsize": 8192, 00:18:21.066 "large_bufsize": 135168 00:18:21.066 } 00:18:21.066 } 00:18:21.066 ] 00:18:21.066 }, 00:18:21.066 { 00:18:21.066 "subsystem": "sock", 00:18:21.066 "config": [ 00:18:21.066 { 00:18:21.066 "method": "sock_set_default_impl", 00:18:21.066 "params": { 00:18:21.066 "impl_name": "posix" 00:18:21.066 } 00:18:21.066 }, 00:18:21.066 { 00:18:21.066 "method": "sock_impl_set_options", 00:18:21.066 "params": { 00:18:21.066 "impl_name": "ssl", 00:18:21.066 "recv_buf_size": 4096, 00:18:21.066 "send_buf_size": 4096, 00:18:21.066 "enable_recv_pipe": true, 00:18:21.066 "enable_quickack": false, 00:18:21.066 "enable_placement_id": 0, 00:18:21.066 "enable_zerocopy_send_server": true, 00:18:21.066 "enable_zerocopy_send_client": false, 00:18:21.066 "zerocopy_threshold": 0, 00:18:21.066 "tls_version": 0, 00:18:21.066 "enable_ktls": false 00:18:21.066 } 00:18:21.066 }, 00:18:21.066 { 00:18:21.066 "method": "sock_impl_set_options", 00:18:21.066 "params": { 00:18:21.066 "impl_name": "posix", 00:18:21.066 "recv_buf_size": 2097152, 00:18:21.066 "send_buf_size": 2097152, 00:18:21.066 "enable_recv_pipe": true, 00:18:21.066 "enable_quickack": false, 00:18:21.066 "enable_placement_id": 0, 00:18:21.066 "enable_zerocopy_send_server": true, 00:18:21.066 "enable_zerocopy_send_client": false, 00:18:21.066 "zerocopy_threshold": 0, 00:18:21.066 "tls_version": 0, 00:18:21.066 "enable_ktls": false 00:18:21.066 } 00:18:21.066 } 00:18:21.066 ] 00:18:21.066 }, 00:18:21.066 { 00:18:21.066 "subsystem": "vmd", 00:18:21.066 "config": [] 00:18:21.066 }, 00:18:21.066 { 00:18:21.066 "subsystem": "accel", 00:18:21.066 "config": [ 00:18:21.066 { 00:18:21.066 "method": "accel_set_options", 00:18:21.066 "params": { 00:18:21.066 "small_cache_size": 128, 00:18:21.066 "large_cache_size": 16, 00:18:21.066 "task_count": 2048, 00:18:21.066 "sequence_count": 2048, 00:18:21.066 "buf_count": 2048 00:18:21.066 } 00:18:21.066 } 00:18:21.066 ] 00:18:21.066 
}, 00:18:21.066 { 00:18:21.066 "subsystem": "bdev", 00:18:21.066 "config": [ 00:18:21.066 { 00:18:21.066 "method": "bdev_set_options", 00:18:21.066 "params": { 00:18:21.066 "bdev_io_pool_size": 65535, 00:18:21.066 "bdev_io_cache_size": 256, 00:18:21.066 "bdev_auto_examine": true, 00:18:21.066 "iobuf_small_cache_size": 128, 00:18:21.066 "iobuf_large_cache_size": 16 00:18:21.066 } 00:18:21.066 }, 00:18:21.066 { 00:18:21.066 "method": "bdev_raid_set_options", 00:18:21.066 "params": { 00:18:21.066 "process_window_size_kb": 1024 00:18:21.066 } 00:18:21.066 }, 00:18:21.066 { 00:18:21.066 "method": "bdev_iscsi_set_options", 00:18:21.066 "params": { 00:18:21.066 "timeout_sec": 30 00:18:21.066 } 00:18:21.066 }, 00:18:21.066 { 00:18:21.066 "method": "bdev_nvme_set_options", 00:18:21.066 "params": { 00:18:21.066 "action_on_timeout": "none", 00:18:21.066 "timeout_us": 0, 00:18:21.066 "timeout_admin_us": 0, 00:18:21.066 "keep_alive_timeout_ms": 10000, 00:18:21.066 "arbitration_burst": 0, 00:18:21.066 "low_priority_weight": 0, 00:18:21.066 "medium_priority_weight": 0, 00:18:21.066 "high_priority_weight": 0, 00:18:21.066 "nvme_adminq_poll_period_us": 10000, 00:18:21.066 "nvme_ioq_poll_period_us": 0, 00:18:21.066 "io_queue_requests": 0, 00:18:21.066 "delay_cmd_submit": true, 00:18:21.066 "transport_retry_count": 4, 00:18:21.066 "bdev_retry_count": 3, 00:18:21.066 "transport_ack_timeout": 0, 00:18:21.066 "ctrlr_loss_timeout_sec": 0, 00:18:21.066 "reconnect_delay_sec": 0, 00:18:21.066 "fast_io_fail_timeout_sec": 0, 00:18:21.066 "disable_auto_failback": false, 00:18:21.066 "generate_uuids": false, 00:18:21.066 "transport_tos": 0, 00:18:21.066 "nvme_error_stat": false, 00:18:21.066 "rdma_srq_size": 0, 00:18:21.066 "io_path_stat": false, 00:18:21.066 "allow_accel_sequence": false, 00:18:21.066 "rdma_max_cq_size": 0, 00:18:21.066 "rdma_cm_event_timeout_ms": 0, 00:18:21.066 "dhchap_digests": [ 00:18:21.066 "sha256", 00:18:21.066 "sha384", 00:18:21.066 "sha512" 00:18:21.066 ], 
00:18:21.066 "dhchap_dhgroups": [ 00:18:21.066 "null", 00:18:21.066 "ffdhe2048", 00:18:21.066 "ffdhe3072", 00:18:21.066 "ffdhe4096", 00:18:21.066 "ffdhe6144", 00:18:21.066 "ffdhe8192" 00:18:21.066 ] 00:18:21.066 } 00:18:21.066 }, 00:18:21.066 { 00:18:21.066 "method": "bdev_nvme_set_hotplug", 00:18:21.066 "params": { 00:18:21.066 "period_us": 100000, 00:18:21.066 "enable": false 00:18:21.066 } 00:18:21.066 }, 00:18:21.066 { 00:18:21.066 "method": "bdev_malloc_create", 00:18:21.066 "params": { 00:18:21.066 "name": "malloc0", 00:18:21.066 "num_blocks": 8192, 00:18:21.066 "block_size": 4096, 00:18:21.066 "physical_block_size": 4096, 00:18:21.066 "uuid": "e454e056-fdfb-4889-b483-78613f8e31d8", 00:18:21.066 "optimal_io_boundary": 0 00:18:21.067 } 00:18:21.067 }, 00:18:21.067 { 00:18:21.067 "method": "bdev_wait_for_examine" 00:18:21.067 } 00:18:21.067 ] 00:18:21.067 }, 00:18:21.067 { 00:18:21.067 "subsystem": "nbd", 00:18:21.067 "config": [] 00:18:21.067 }, 00:18:21.067 { 00:18:21.067 "subsystem": "scheduler", 00:18:21.067 "config": [ 00:18:21.067 { 00:18:21.067 "method": "framework_set_scheduler", 00:18:21.067 "params": { 00:18:21.067 "name": "static" 00:18:21.067 } 00:18:21.067 } 00:18:21.067 ] 00:18:21.067 }, 00:18:21.067 { 00:18:21.067 "subsystem": "nvmf", 00:18:21.067 "config": [ 00:18:21.067 { 00:18:21.067 "method": "nvmf_set_config", 00:18:21.067 "params": { 00:18:21.067 "discovery_filter": "match_any", 00:18:21.067 "admin_cmd_passthru": { 00:18:21.067 "identify_ctrlr": false 00:18:21.067 } 00:18:21.067 } 00:18:21.067 }, 00:18:21.067 { 00:18:21.067 "method": "nvmf_set_max_subsystems", 00:18:21.067 "params": { 00:18:21.067 "max_subsystems": 1024 00:18:21.067 } 00:18:21.067 }, 00:18:21.067 { 00:18:21.067 "method": "nvmf_set_crdt", 00:18:21.067 "params": { 00:18:21.067 "crdt1": 0, 00:18:21.067 "crdt2": 0, 00:18:21.067 "crdt3": 0 00:18:21.067 } 00:18:21.067 }, 00:18:21.067 { 00:18:21.067 "method": "nvmf_create_transport", 00:18:21.067 "params": { 00:18:21.067 "trtype": 
"TCP", 00:18:21.067 "max_queue_depth": 128, 00:18:21.067 "max_io_qpairs_per_ctrlr": 127, 00:18:21.067 "in_capsule_data_size": 4096, 00:18:21.067 "max_io_size": 131072, 00:18:21.067 "io_unit_size": 131072, 00:18:21.067 "max_aq_depth": 128, 00:18:21.067 "num_shared_buffers": 511, 00:18:21.067 "buf_cache_size": 4294967295, 00:18:21.067 "dif_insert_or_strip": false, 00:18:21.067 "zcopy": false, 00:18:21.067 "c2h_success": false, 00:18:21.067 "sock_priority": 0, 00:18:21.067 "abort_timeout_sec": 1, 00:18:21.067 "ack_timeout": 0, 00:18:21.067 "data_wr_pool_size": 0 00:18:21.067 } 00:18:21.067 }, 00:18:21.067 { 00:18:21.067 "method": "nvmf_create_subsystem", 00:18:21.067 "params": { 00:18:21.067 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.067 "allow_any_host": false, 00:18:21.067 "serial_number": "00000000000000000000", 00:18:21.067 "model_number": "SPDK bdev Controller", 00:18:21.067 "max_namespaces": 32, 00:18:21.067 "min_cntlid": 1, 00:18:21.067 "max_cntlid": 65519, 00:18:21.067 "ana_reporting": false 00:18:21.067 } 00:18:21.067 }, 00:18:21.067 { 00:18:21.067 "method": "nvmf_subsystem_add_host", 00:18:21.067 "params": { 00:18:21.067 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.067 "host": "nqn.2016-06.io.spdk:host1", 00:18:21.067 "psk": "key0" 00:18:21.067 } 00:18:21.067 }, 00:18:21.067 { 00:18:21.067 "method": "nvmf_subsystem_add_ns", 00:18:21.067 "params": { 00:18:21.067 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.067 "namespace": { 00:18:21.067 "nsid": 1, 00:18:21.067 "bdev_name": "malloc0", 00:18:21.067 "nguid": "E454E056FDFB4889B48378613F8E31D8", 00:18:21.067 "uuid": "e454e056-fdfb-4889-b483-78613f8e31d8", 00:18:21.067 "no_auto_visible": false 00:18:21.067 } 00:18:21.067 } 00:18:21.067 }, 00:18:21.067 { 00:18:21.067 "method": "nvmf_subsystem_add_listener", 00:18:21.067 "params": { 00:18:21.067 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.067 "listen_address": { 00:18:21.067 "trtype": "TCP", 00:18:21.067 "adrfam": "IPv4", 00:18:21.067 "traddr": "10.0.0.2", 
00:18:21.067 "trsvcid": "4420" 00:18:21.067 }, 00:18:21.067 "secure_channel": false, 00:18:21.067 "sock_impl": "ssl" 00:18:21.067 } 00:18:21.067 } 00:18:21.067 ] 00:18:21.067 } 00:18:21.067 ] 00:18:21.067 }' 00:18:21.067 22:42:04 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:21.324 22:42:04 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:18:21.324 "subsystems": [ 00:18:21.324 { 00:18:21.324 "subsystem": "keyring", 00:18:21.324 "config": [ 00:18:21.324 { 00:18:21.324 "method": "keyring_file_add_key", 00:18:21.324 "params": { 00:18:21.324 "name": "key0", 00:18:21.324 "path": "/tmp/tmp.myCErdnqKM" 00:18:21.324 } 00:18:21.324 } 00:18:21.324 ] 00:18:21.324 }, 00:18:21.324 { 00:18:21.324 "subsystem": "iobuf", 00:18:21.324 "config": [ 00:18:21.324 { 00:18:21.324 "method": "iobuf_set_options", 00:18:21.324 "params": { 00:18:21.324 "small_pool_count": 8192, 00:18:21.324 "large_pool_count": 1024, 00:18:21.324 "small_bufsize": 8192, 00:18:21.324 "large_bufsize": 135168 00:18:21.324 } 00:18:21.324 } 00:18:21.324 ] 00:18:21.324 }, 00:18:21.324 { 00:18:21.324 "subsystem": "sock", 00:18:21.324 "config": [ 00:18:21.324 { 00:18:21.324 "method": "sock_set_default_impl", 00:18:21.324 "params": { 00:18:21.324 "impl_name": "posix" 00:18:21.324 } 00:18:21.324 }, 00:18:21.324 { 00:18:21.324 "method": "sock_impl_set_options", 00:18:21.324 "params": { 00:18:21.324 "impl_name": "ssl", 00:18:21.324 "recv_buf_size": 4096, 00:18:21.324 "send_buf_size": 4096, 00:18:21.324 "enable_recv_pipe": true, 00:18:21.324 "enable_quickack": false, 00:18:21.324 "enable_placement_id": 0, 00:18:21.324 "enable_zerocopy_send_server": true, 00:18:21.324 "enable_zerocopy_send_client": false, 00:18:21.324 "zerocopy_threshold": 0, 00:18:21.324 "tls_version": 0, 00:18:21.324 "enable_ktls": false 00:18:21.324 } 00:18:21.324 }, 00:18:21.324 { 00:18:21.324 "method": "sock_impl_set_options", 00:18:21.324 
"params": { 00:18:21.324 "impl_name": "posix", 00:18:21.324 "recv_buf_size": 2097152, 00:18:21.324 "send_buf_size": 2097152, 00:18:21.324 "enable_recv_pipe": true, 00:18:21.324 "enable_quickack": false, 00:18:21.324 "enable_placement_id": 0, 00:18:21.324 "enable_zerocopy_send_server": true, 00:18:21.324 "enable_zerocopy_send_client": false, 00:18:21.324 "zerocopy_threshold": 0, 00:18:21.324 "tls_version": 0, 00:18:21.324 "enable_ktls": false 00:18:21.324 } 00:18:21.324 } 00:18:21.324 ] 00:18:21.324 }, 00:18:21.324 { 00:18:21.324 "subsystem": "vmd", 00:18:21.324 "config": [] 00:18:21.324 }, 00:18:21.324 { 00:18:21.324 "subsystem": "accel", 00:18:21.324 "config": [ 00:18:21.324 { 00:18:21.324 "method": "accel_set_options", 00:18:21.324 "params": { 00:18:21.324 "small_cache_size": 128, 00:18:21.324 "large_cache_size": 16, 00:18:21.324 "task_count": 2048, 00:18:21.324 "sequence_count": 2048, 00:18:21.324 "buf_count": 2048 00:18:21.324 } 00:18:21.324 } 00:18:21.324 ] 00:18:21.324 }, 00:18:21.324 { 00:18:21.324 "subsystem": "bdev", 00:18:21.324 "config": [ 00:18:21.324 { 00:18:21.324 "method": "bdev_set_options", 00:18:21.324 "params": { 00:18:21.324 "bdev_io_pool_size": 65535, 00:18:21.324 "bdev_io_cache_size": 256, 00:18:21.324 "bdev_auto_examine": true, 00:18:21.324 "iobuf_small_cache_size": 128, 00:18:21.324 "iobuf_large_cache_size": 16 00:18:21.324 } 00:18:21.324 }, 00:18:21.324 { 00:18:21.324 "method": "bdev_raid_set_options", 00:18:21.324 "params": { 00:18:21.324 "process_window_size_kb": 1024 00:18:21.324 } 00:18:21.324 }, 00:18:21.324 { 00:18:21.324 "method": "bdev_iscsi_set_options", 00:18:21.324 "params": { 00:18:21.324 "timeout_sec": 30 00:18:21.324 } 00:18:21.324 }, 00:18:21.324 { 00:18:21.324 "method": "bdev_nvme_set_options", 00:18:21.324 "params": { 00:18:21.324 "action_on_timeout": "none", 00:18:21.324 "timeout_us": 0, 00:18:21.324 "timeout_admin_us": 0, 00:18:21.324 "keep_alive_timeout_ms": 10000, 00:18:21.324 "arbitration_burst": 0, 00:18:21.324 
"low_priority_weight": 0, 00:18:21.324 "medium_priority_weight": 0, 00:18:21.324 "high_priority_weight": 0, 00:18:21.324 "nvme_adminq_poll_period_us": 10000, 00:18:21.324 "nvme_ioq_poll_period_us": 0, 00:18:21.324 "io_queue_requests": 512, 00:18:21.324 "delay_cmd_submit": true, 00:18:21.324 "transport_retry_count": 4, 00:18:21.324 "bdev_retry_count": 3, 00:18:21.324 "transport_ack_timeout": 0, 00:18:21.324 "ctrlr_loss_timeout_sec": 0, 00:18:21.324 "reconnect_delay_sec": 0, 00:18:21.324 "fast_io_fail_timeout_sec": 0, 00:18:21.324 "disable_auto_failback": false, 00:18:21.324 "generate_uuids": false, 00:18:21.324 "transport_tos": 0, 00:18:21.324 "nvme_error_stat": false, 00:18:21.324 "rdma_srq_size": 0, 00:18:21.324 "io_path_stat": false, 00:18:21.324 "allow_accel_sequence": false, 00:18:21.324 "rdma_max_cq_size": 0, 00:18:21.324 "rdma_cm_event_timeout_ms": 0, 00:18:21.324 "dhchap_digests": [ 00:18:21.324 "sha256", 00:18:21.324 "sha384", 00:18:21.324 "sha512" 00:18:21.324 ], 00:18:21.324 "dhchap_dhgroups": [ 00:18:21.324 "null", 00:18:21.324 "ffdhe2048", 00:18:21.324 "ffdhe3072", 00:18:21.324 "ffdhe4096", 00:18:21.324 "ffdhe6144", 00:18:21.324 "ffdhe8192" 00:18:21.324 ] 00:18:21.324 } 00:18:21.324 }, 00:18:21.324 { 00:18:21.324 "method": "bdev_nvme_attach_controller", 00:18:21.324 "params": { 00:18:21.325 "name": "nvme0", 00:18:21.325 "trtype": "TCP", 00:18:21.325 "adrfam": "IPv4", 00:18:21.325 "traddr": "10.0.0.2", 00:18:21.325 "trsvcid": "4420", 00:18:21.325 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.325 "prchk_reftag": false, 00:18:21.325 "prchk_guard": false, 00:18:21.325 "ctrlr_loss_timeout_sec": 0, 00:18:21.325 "reconnect_delay_sec": 0, 00:18:21.325 "fast_io_fail_timeout_sec": 0, 00:18:21.325 "psk": "key0", 00:18:21.325 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:21.325 "hdgst": false, 00:18:21.325 "ddgst": false 00:18:21.325 } 00:18:21.325 }, 00:18:21.325 { 00:18:21.325 "method": "bdev_nvme_set_hotplug", 00:18:21.325 "params": { 00:18:21.325 
"period_us": 100000, 00:18:21.325 "enable": false 00:18:21.325 } 00:18:21.325 }, 00:18:21.325 { 00:18:21.325 "method": "bdev_enable_histogram", 00:18:21.325 "params": { 00:18:21.325 "name": "nvme0n1", 00:18:21.325 "enable": true 00:18:21.325 } 00:18:21.325 }, 00:18:21.325 { 00:18:21.325 "method": "bdev_wait_for_examine" 00:18:21.325 } 00:18:21.325 ] 00:18:21.325 }, 00:18:21.325 { 00:18:21.325 "subsystem": "nbd", 00:18:21.325 "config": [] 00:18:21.325 } 00:18:21.325 ] 00:18:21.325 }' 00:18:21.325 22:42:04 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 1284486 00:18:21.325 22:42:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1284486 ']' 00:18:21.325 22:42:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1284486 00:18:21.325 22:42:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:18:21.325 22:42:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:18:21.325 22:42:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1284486 00:18:21.325 22:42:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:18:21.325 22:42:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:18:21.325 22:42:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1284486' 00:18:21.325 killing process with pid 1284486 00:18:21.325 22:42:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1284486 00:18:21.325 Received shutdown signal, test time was about 1.000000 seconds 00:18:21.325 00:18:21.325 Latency(us) 00:18:21.325 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.325 =================================================================================================================== 00:18:21.325 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:21.325 22:42:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1284486 
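The bdevperf configuration captured above by `save_config` carries a timestamp prefix on every fragment. Stripped of those prefixes, the TLS-relevant portion of that same dump reduces to the excerpt below (non-TLS subsystems such as `iobuf`, `sock`, `accel`, and `nbd` omitted for brevity; all values are taken verbatim from the dump):

```json
{
  "subsystems": [
    {
      "subsystem": "keyring",
      "config": [
        {
          "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/tmp.myCErdnqKM" }
        }
      ]
    },
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "nvme0",
            "trtype": "TCP",
            "adrfam": "IPv4",
            "traddr": "10.0.0.2",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "psk": "key0"
          }
        }
      ]
    }
  ]
}
```

Replayed via `nvmfappstart -c`, a config in this shape reproduces the keyring-plus-attach sequence without issuing the individual RPCs, which is what the subsequent `-c /dev/fd/62` start in this log relies on.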
00:18:21.581 22:42:05 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 1284393 00:18:21.581 22:42:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1284393 ']' 00:18:21.581 22:42:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1284393 00:18:21.581 22:42:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:18:21.581 22:42:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:18:21.581 22:42:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1284393 00:18:21.581 22:42:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:18:21.581 22:42:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:18:21.581 22:42:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1284393' 00:18:21.581 killing process with pid 1284393 00:18:21.581 22:42:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1284393 00:18:21.581 22:42:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1284393 00:18:21.839 22:42:05 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:18:21.839 22:42:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:21.839 22:42:05 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:18:21.839 "subsystems": [ 00:18:21.839 { 00:18:21.839 "subsystem": "keyring", 00:18:21.839 "config": [ 00:18:21.839 { 00:18:21.839 "method": "keyring_file_add_key", 00:18:21.839 "params": { 00:18:21.839 "name": "key0", 00:18:21.839 "path": "/tmp/tmp.myCErdnqKM" 00:18:21.839 } 00:18:21.839 } 00:18:21.839 ] 00:18:21.839 }, 00:18:21.839 { 00:18:21.839 "subsystem": "iobuf", 00:18:21.839 "config": [ 00:18:21.839 { 00:18:21.839 "method": "iobuf_set_options", 00:18:21.839 "params": { 00:18:21.839 "small_pool_count": 8192, 00:18:21.839 "large_pool_count": 1024, 00:18:21.839 "small_bufsize": 8192, 00:18:21.839 
"large_bufsize": 135168 00:18:21.839 } 00:18:21.839 } 00:18:21.839 ] 00:18:21.839 }, 00:18:21.839 { 00:18:21.839 "subsystem": "sock", 00:18:21.839 "config": [ 00:18:21.839 { 00:18:21.839 "method": "sock_set_default_impl", 00:18:21.839 "params": { 00:18:21.839 "impl_name": "posix" 00:18:21.839 } 00:18:21.839 }, 00:18:21.839 { 00:18:21.839 "method": "sock_impl_set_options", 00:18:21.839 "params": { 00:18:21.839 "impl_name": "ssl", 00:18:21.839 "recv_buf_size": 4096, 00:18:21.839 "send_buf_size": 4096, 00:18:21.839 "enable_recv_pipe": true, 00:18:21.839 "enable_quickack": false, 00:18:21.839 "enable_placement_id": 0, 00:18:21.839 "enable_zerocopy_send_server": true, 00:18:21.839 "enable_zerocopy_send_client": false, 00:18:21.839 "zerocopy_threshold": 0, 00:18:21.839 "tls_version": 0, 00:18:21.839 "enable_ktls": false 00:18:21.839 } 00:18:21.839 }, 00:18:21.839 { 00:18:21.839 "method": "sock_impl_set_options", 00:18:21.839 "params": { 00:18:21.839 "impl_name": "posix", 00:18:21.839 "recv_buf_size": 2097152, 00:18:21.839 "send_buf_size": 2097152, 00:18:21.839 "enable_recv_pipe": true, 00:18:21.839 "enable_quickack": false, 00:18:21.839 "enable_placement_id": 0, 00:18:21.839 "enable_zerocopy_send_server": true, 00:18:21.839 "enable_zerocopy_send_client": false, 00:18:21.839 "zerocopy_threshold": 0, 00:18:21.839 "tls_version": 0, 00:18:21.839 "enable_ktls": false 00:18:21.839 } 00:18:21.839 } 00:18:21.839 ] 00:18:21.839 }, 00:18:21.839 { 00:18:21.839 "subsystem": "vmd", 00:18:21.839 "config": [] 00:18:21.839 }, 00:18:21.839 { 00:18:21.839 "subsystem": "accel", 00:18:21.839 "config": [ 00:18:21.839 { 00:18:21.839 "method": "accel_set_options", 00:18:21.839 "params": { 00:18:21.839 "small_cache_size": 128, 00:18:21.839 "large_cache_size": 16, 00:18:21.839 "task_count": 2048, 00:18:21.839 "sequence_count": 2048, 00:18:21.839 "buf_count": 2048 00:18:21.839 } 00:18:21.839 } 00:18:21.839 ] 00:18:21.839 }, 00:18:21.839 { 00:18:21.839 "subsystem": "bdev", 00:18:21.839 "config": [ 
00:18:21.839 { 00:18:21.839 "method": "bdev_set_options", 00:18:21.839 "params": { 00:18:21.839 "bdev_io_pool_size": 65535, 00:18:21.839 "bdev_io_cache_size": 256, 00:18:21.839 "bdev_auto_examine": true, 00:18:21.839 "iobuf_small_cache_size": 128, 00:18:21.839 "iobuf_large_cache_size": 16 00:18:21.839 } 00:18:21.839 }, 00:18:21.839 { 00:18:21.839 "method": "bdev_raid_set_options", 00:18:21.839 "params": { 00:18:21.839 "process_window_size_kb": 1024 00:18:21.839 } 00:18:21.839 }, 00:18:21.839 { 00:18:21.839 "method": "bdev_iscsi_set_options", 00:18:21.839 "params": { 00:18:21.839 "timeout_sec": 30 00:18:21.839 } 00:18:21.839 }, 00:18:21.839 { 00:18:21.839 "method": "bdev_nvme_set_options", 00:18:21.839 "params": { 00:18:21.839 "action_on_timeout": "none", 00:18:21.839 "timeout_us": 0, 00:18:21.839 "timeout_admin_us": 0, 00:18:21.839 "keep_alive_timeout_ms": 10000, 00:18:21.839 "arbitration_burst": 0, 00:18:21.839 "low_priority_weight": 0, 00:18:21.839 "medium_priority_weight": 0, 00:18:21.839 "high_priority_weight": 0, 00:18:21.839 "nvme_adminq_poll_period_us": 10000, 00:18:21.839 "nvme_ioq_poll_period_us": 0, 00:18:21.839 "io_queue_requests": 0, 00:18:21.839 "delay_cmd_submit": true, 00:18:21.839 "transport_retry_count": 4, 00:18:21.839 "bdev_retry_count": 3, 00:18:21.839 "transport_ack_timeout": 0, 00:18:21.839 "ctrlr_loss_timeout_sec": 0, 00:18:21.839 "reconnect_delay_sec": 0, 00:18:21.839 "fast_io_fail_timeout_sec": 0, 00:18:21.839 "disable_auto_failback": false, 00:18:21.839 "generate_uuids": false, 00:18:21.839 "transport_tos": 0, 00:18:21.839 "nvme_error_stat": false, 00:18:21.839 "rdma_srq_size": 0, 00:18:21.839 "io_path_stat": false, 00:18:21.839 "allow_accel_sequence": false, 00:18:21.839 "rdma_max_cq_size": 0, 00:18:21.839 "rdma_cm_event_timeout_ms": 0, 00:18:21.839 "dhchap_digests": [ 00:18:21.839 "sha256", 00:18:21.839 "sha384", 00:18:21.839 "sha512" 00:18:21.839 ], 00:18:21.839 "dhchap_dhgroups": [ 00:18:21.839 "null", 00:18:21.839 "ffdhe2048", 
00:18:21.839 "ffdhe3072", 00:18:21.839 "ffdhe4096", 00:18:21.839 "ffdhe6144", 00:18:21.839 "ffdhe8192" 00:18:21.839 ] 00:18:21.839 } 00:18:21.839 }, 00:18:21.839 { 00:18:21.839 "method": "bdev_nvme_set_hotplug", 00:18:21.839 "params": { 00:18:21.839 "period_us": 100000, 00:18:21.839 "enable": false 00:18:21.839 } 00:18:21.839 }, 00:18:21.839 { 00:18:21.839 "method": "bdev_malloc_create", 00:18:21.839 "params": { 00:18:21.839 "name": "malloc0", 00:18:21.839 "num_blocks": 8192, 00:18:21.839 "block_size": 4096, 00:18:21.839 "physical_block_size": 4096, 00:18:21.839 "uuid": "e454e056-fdfb-4889-b483-78613f8e31d8", 00:18:21.839 "optimal_io_boundary": 0 00:18:21.839 } 00:18:21.839 }, 00:18:21.839 { 00:18:21.839 "method": "bdev_wait_for_examine" 00:18:21.839 } 00:18:21.839 ] 00:18:21.839 }, 00:18:21.839 { 00:18:21.839 "subsystem": "nbd", 00:18:21.839 "config": [] 00:18:21.839 }, 00:18:21.839 { 00:18:21.839 "subsystem": "scheduler", 00:18:21.839 "config": [ 00:18:21.839 { 00:18:21.839 "method": "framework_set_scheduler", 00:18:21.839 "params": { 00:18:21.839 "name": "static" 00:18:21.839 } 00:18:21.839 } 00:18:21.839 ] 00:18:21.839 }, 00:18:21.839 { 00:18:21.839 "subsystem": "nvmf", 00:18:21.839 "config": [ 00:18:21.839 { 00:18:21.839 "method": "nvmf_set_config", 00:18:21.839 "params": { 00:18:21.839 "discovery_filter": "match_any", 00:18:21.839 "admin_cmd_passthru": { 00:18:21.839 "identify_ctrlr": false 00:18:21.839 } 00:18:21.839 } 00:18:21.839 }, 00:18:21.839 { 00:18:21.839 "method": "nvmf_set_max_subsystems", 00:18:21.839 "params": { 00:18:21.839 "max_subsystems": 1024 00:18:21.839 } 00:18:21.839 }, 00:18:21.839 { 00:18:21.839 "method": "nvmf_set_crdt", 00:18:21.839 "params": { 00:18:21.839 "crdt1": 0, 00:18:21.839 "crdt2": 0, 00:18:21.839 "crdt3": 0 00:18:21.839 } 00:18:21.839 }, 00:18:21.839 { 00:18:21.839 "method": "nvmf_create_transport", 00:18:21.839 "params": { 00:18:21.839 "trtype": "TCP", 00:18:21.839 "max_queue_depth": 128, 00:18:21.839 
"max_io_qpairs_per_ctrlr": 127, 00:18:21.839 "in_capsule_data_size": 4096, 00:18:21.839 "max_io_size": 131072, 00:18:21.839 "io_unit_size": 131072, 00:18:21.839 "max_aq_depth": 128, 00:18:21.839 "num_shared_buffers": 511, 00:18:21.839 "buf_cache_size": 4294967295, 00:18:21.839 "dif_insert_or_strip": false, 00:18:21.839 "zcopy": false, 00:18:21.839 "c2h_success": false, 00:18:21.839 "sock_priority": 0, 00:18:21.839 "abort_timeout_sec": 1, 00:18:21.839 "ack_timeout": 0, 00:18:21.839 "data_wr_pool_size": 0 00:18:21.839 } 00:18:21.839 }, 00:18:21.839 { 00:18:21.839 "method": "nvmf_create_subsystem", 00:18:21.839 "params": { 00:18:21.839 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.839 "allow_any_host": false, 00:18:21.839 "serial_number": "00000000000000000000", 00:18:21.839 "model_number": "SPDK bdev Controller", 00:18:21.839 "max_namespaces": 32, 00:18:21.839 "min_cntlid": 1, 00:18:21.839 "max_cntlid": 65519, 00:18:21.839 "ana_reporting": false 00:18:21.839 } 00:18:21.839 }, 00:18:21.839 { 00:18:21.840 "method": "nvmf_subsystem_add_host", 00:18:21.840 "params": { 00:18:21.840 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.840 "host": "nqn.2016-06.io.spdk:host1", 00:18:21.840 "psk": "key0" 00:18:21.840 } 00:18:21.840 }, 00:18:21.840 { 00:18:21.840 "method": "nvmf_subsystem_add_ns", 00:18:21.840 "params": { 00:18:21.840 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.840 "namespace": { 00:18:21.840 "nsid": 1, 00:18:21.840 "bdev_name": "malloc0", 00:18:21.840 "nguid": "E454E056FDFB4889B48378613F8E31D8", 00:18:21.840 "uuid": "e454e056-fdfb-4889-b483-78613f8e31d8", 00:18:21.840 "no_auto_visible": false 00:18:21.840 } 00:18:21.840 } 00:18:21.840 }, 00:18:21.840 { 00:18:21.840 "method": "nvmf_subsystem_add_listener", 00:18:21.840 "params": { 00:18:21.840 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.840 "listen_address": { 00:18:21.840 "trtype": "TCP", 00:18:21.840 "adrfam": "IPv4", 00:18:21.840 "traddr": "10.0.0.2", 00:18:21.840 "trsvcid": "4420" 00:18:21.840 }, 00:18:21.840 
"secure_channel": false, 00:18:21.840 "sock_impl": "ssl" 00:18:21.840 } 00:18:21.840 } 00:18:21.840 ] 00:18:21.840 } 00:18:21.840 ] 00:18:21.840 }' 00:18:21.840 22:42:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:21.840 22:42:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.098 22:42:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1284896 00:18:22.098 22:42:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:22.098 22:42:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1284896 00:18:22.098 22:42:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1284896 ']' 00:18:22.098 22:42:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.098 22:42:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:22.098 22:42:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.098 22:42:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:22.098 22:42:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.098 [2024-07-15 22:42:05.389710] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:18:22.098 [2024-07-15 22:42:05.389791] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.098 [2024-07-15 22:42:05.453351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.098 [2024-07-15 22:42:05.559091] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:22.098 [2024-07-15 22:42:05.559144] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:22.098 [2024-07-15 22:42:05.559173] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:22.098 [2024-07-15 22:42:05.559185] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:22.098 [2024-07-15 22:42:05.559195] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:22.098 [2024-07-15 22:42:05.559270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.356 [2024-07-15 22:42:05.803328] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:22.356 [2024-07-15 22:42:05.835351] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:22.356 [2024-07-15 22:42:05.843054] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:22.920 22:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:22.920 22:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:18:22.920 22:42:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:22.920 22:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:22.920 22:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.920 22:42:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:22.920 22:42:06 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=1285048 00:18:22.920 22:42:06 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 1285048 /var/tmp/bdevperf.sock 00:18:22.920 22:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1285048 ']' 00:18:22.920 22:42:06 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:22.920 22:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:22.920 22:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:22.920 22:42:06 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:18:22.920 "subsystems": [ 00:18:22.920 { 00:18:22.920 "subsystem": "keyring", 00:18:22.920 "config": [ 00:18:22.920 { 00:18:22.920 
"method": "keyring_file_add_key", 00:18:22.920 "params": { 00:18:22.920 "name": "key0", 00:18:22.920 "path": "/tmp/tmp.myCErdnqKM" 00:18:22.920 } 00:18:22.920 } 00:18:22.920 ] 00:18:22.920 }, 00:18:22.920 { 00:18:22.920 "subsystem": "iobuf", 00:18:22.920 "config": [ 00:18:22.920 { 00:18:22.920 "method": "iobuf_set_options", 00:18:22.920 "params": { 00:18:22.920 "small_pool_count": 8192, 00:18:22.920 "large_pool_count": 1024, 00:18:22.920 "small_bufsize": 8192, 00:18:22.920 "large_bufsize": 135168 00:18:22.920 } 00:18:22.920 } 00:18:22.920 ] 00:18:22.920 }, 00:18:22.920 { 00:18:22.920 "subsystem": "sock", 00:18:22.920 "config": [ 00:18:22.920 { 00:18:22.920 "method": "sock_set_default_impl", 00:18:22.920 "params": { 00:18:22.920 "impl_name": "posix" 00:18:22.920 } 00:18:22.920 }, 00:18:22.920 { 00:18:22.920 "method": "sock_impl_set_options", 00:18:22.920 "params": { 00:18:22.920 "impl_name": "ssl", 00:18:22.920 "recv_buf_size": 4096, 00:18:22.920 "send_buf_size": 4096, 00:18:22.920 "enable_recv_pipe": true, 00:18:22.920 "enable_quickack": false, 00:18:22.920 "enable_placement_id": 0, 00:18:22.920 "enable_zerocopy_send_server": true, 00:18:22.920 "enable_zerocopy_send_client": false, 00:18:22.920 "zerocopy_threshold": 0, 00:18:22.920 "tls_version": 0, 00:18:22.920 "enable_ktls": false 00:18:22.920 } 00:18:22.920 }, 00:18:22.920 { 00:18:22.920 "method": "sock_impl_set_options", 00:18:22.920 "params": { 00:18:22.920 "impl_name": "posix", 00:18:22.920 "recv_buf_size": 2097152, 00:18:22.920 "send_buf_size": 2097152, 00:18:22.920 "enable_recv_pipe": true, 00:18:22.920 "enable_quickack": false, 00:18:22.920 "enable_placement_id": 0, 00:18:22.921 "enable_zerocopy_send_server": true, 00:18:22.921 "enable_zerocopy_send_client": false, 00:18:22.921 "zerocopy_threshold": 0, 00:18:22.921 "tls_version": 0, 00:18:22.921 "enable_ktls": false 00:18:22.921 } 00:18:22.921 } 00:18:22.921 ] 00:18:22.921 }, 00:18:22.921 { 00:18:22.921 "subsystem": "vmd", 00:18:22.921 "config": [] 
00:18:22.921 }, 00:18:22.921 { 00:18:22.921 "subsystem": "accel", 00:18:22.921 "config": [ 00:18:22.921 { 00:18:22.921 "method": "accel_set_options", 00:18:22.921 "params": { 00:18:22.921 "small_cache_size": 128, 00:18:22.921 "large_cache_size": 16, 00:18:22.921 "task_count": 2048, 00:18:22.921 "sequence_count": 2048, 00:18:22.921 "buf_count": 2048 00:18:22.921 } 00:18:22.921 } 00:18:22.921 ] 00:18:22.921 }, 00:18:22.921 { 00:18:22.921 "subsystem": "bdev", 00:18:22.921 "config": [ 00:18:22.921 { 00:18:22.921 "method": "bdev_set_options", 00:18:22.921 "params": { 00:18:22.921 "bdev_io_pool_size": 65535, 00:18:22.921 "bdev_io_cache_size": 256, 00:18:22.921 "bdev_auto_examine": true, 00:18:22.921 "iobuf_small_cache_size": 128, 00:18:22.921 "iobuf_large_cache_size": 16 00:18:22.921 } 00:18:22.921 }, 00:18:22.921 { 00:18:22.921 "method": "bdev_raid_set_options", 00:18:22.921 "params": { 00:18:22.921 "process_window_size_kb": 1024 00:18:22.921 } 00:18:22.921 }, 00:18:22.921 { 00:18:22.921 "method": "bdev_iscsi_set_options", 00:18:22.921 "params": { 00:18:22.921 "timeout_sec": 30 00:18:22.921 } 00:18:22.921 }, 00:18:22.921 { 00:18:22.921 "method": "bdev_nvme_set_options", 00:18:22.921 "params": { 00:18:22.921 "action_on_timeout": "none", 00:18:22.921 "timeout_us": 0, 00:18:22.921 "timeout_admin_us": 0, 00:18:22.921 "keep_alive_timeout_ms": 10000, 00:18:22.921 "arbitration_burst": 0, 00:18:22.921 "low_priority_weight": 0, 00:18:22.921 "medium_priority_weight": 0, 00:18:22.921 "high_priority_weight": 0, 00:18:22.921 "nvme_adminq_poll_period_us": 10000, 00:18:22.921 "nvme_ioq_poll_period_us": 0, 00:18:22.921 "io_queue_requests": 512, 00:18:22.921 "delay_cmd_submit": true, 00:18:22.921 "transport_retry_count": 4, 00:18:22.921 "bdev_retry_count": 3, 00:18:22.921 "transport_ack_timeout": 0, 00:18:22.921 "ctrlr_loss_timeout_sec": 0, 00:18:22.921 "reconnect_delay_sec": 0, 00:18:22.921 "fast_io_fail_timeout_sec": 0, 00:18:22.921 "disable_auto_failback": false, 00:18:22.921 
"generate_uuids": false, 00:18:22.921 "transport_tos": 0, 00:18:22.921 "nvme_error_stat": false, 00:18:22.921 "rdma_srq_size": 0, 00:18:22.921 "io_path_stat": false, 00:18:22.921 "allow_accel_sequence": false, 00:18:22.921 "rdma_max_cq_size": 0, 00:18:22.921 "rdma_cm_event_timeout_ms": 0, 00:18:22.921 "dhchap_digests": [ 00:18:22.921 "sha256", 00:18:22.921 "sha384", 00:18:22.921 "sha512" 00:18:22.921 ], 00:18:22.921 "dhchap_dhgroups": [ 00:18:22.921 "null", 00:18:22.921 "ffdhe2048", 00:18:22.921 "ffdhe3072", 00:18:22.921 "ffdhe4096", 00:18:22.921 "ffdhe6144", 00:18:22.921 "ffdhe8192" 00:18:22.921 ] 00:18:22.921 } 00:18:22.921 }, 00:18:22.921 { 00:18:22.921 "method": "bdev_nvme_attach_controller", 00:18:22.921 "params": { 00:18:22.921 "name": "nvme0", 00:18:22.921 "trtype": "TCP", 00:18:22.921 "adrfam": "IPv4", 00:18:22.921 "traddr": "10.0.0.2", 00:18:22.921 "trsvcid": "4420", 00:18:22.921 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.921 "prchk_reftag": false, 00:18:22.921 "prchk_guard": false, 00:18:22.921 "ctrlr_loss_timeout_sec": 0, 00:18:22.921 "reconnect_delay_sec": 0, 00:18:22.921 "fast_io_fail_timeout_sec": 0, 00:18:22.921 "psk": "key0", 00:18:22.921 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:22.921 "hdgst": false, 00:18:22.921 "ddgst": false 00:18:22.921 } 00:18:22.921 }, 00:18:22.921 { 00:18:22.921 "method": "bdev_nvme_set_hotplug", 00:18:22.921 "params": { 00:18:22.921 "period_us": 100000, 00:18:22.921 "enable": false 00:18:22.921 } 00:18:22.921 }, 00:18:22.921 { 00:18:22.921 "method": "bdev_enable_histogram", 00:18:22.921 "params": { 00:18:22.921 "name": "nvme0n1", 00:18:22.921 "enable": true 00:18:22.921 } 00:18:22.921 }, 00:18:22.921 { 00:18:22.921 "method": "bdev_wait_for_examine" 00:18:22.921 } 00:18:22.921 ] 00:18:22.921 }, 00:18:22.921 { 00:18:22.921 "subsystem": "nbd", 00:18:22.921 "config": [] 00:18:22.921 } 00:18:22.921 ] 00:18:22.921 }' 00:18:22.921 22:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:22.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:22.921 22:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:22.921 22:42:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.921 [2024-07-15 22:42:06.411554] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:18:22.921 [2024-07-15 22:42:06.411632] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1285048 ] 00:18:23.180 [2024-07-15 22:42:06.473224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.180 [2024-07-15 22:42:06.588696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:23.437 [2024-07-15 22:42:06.774815] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:24.002 22:42:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:24.002 22:42:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:18:24.002 22:42:07 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:24.002 22:42:07 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:18:24.260 22:42:07 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.260 22:42:07 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:24.260 Running I/O for 1 seconds... 
00:18:25.631 00:18:25.631 Latency(us) 00:18:25.631 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.631 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:25.631 Verification LBA range: start 0x0 length 0x2000 00:18:25.631 nvme0n1 : 1.06 1700.12 6.64 0.00 0.00 73485.93 6699.24 106411.05 00:18:25.631 =================================================================================================================== 00:18:25.631 Total : 1700.12 6.64 0.00 0.00 73485.93 6699.24 106411.05 00:18:25.631 0 00:18:25.631 22:42:08 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:18:25.631 22:42:08 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:18:25.631 22:42:08 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:25.631 22:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@800 -- # type=--id 00:18:25.631 22:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@801 -- # id=0 00:18:25.631 22:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@802 -- # '[' --id = --pid ']' 00:18:25.631 22:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:25.631 22:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # shm_files=nvmf_trace.0 00:18:25.631 22:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # [[ -z nvmf_trace.0 ]] 00:18:25.631 22:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # for n in $shm_files 00:18:25.631 22:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:25.631 nvmf_trace.0 00:18:25.631 22:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@815 -- # return 0 00:18:25.631 22:42:08 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1285048 00:18:25.631 22:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1285048 ']' 
00:18:25.631 22:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1285048 00:18:25.631 22:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:18:25.631 22:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:18:25.631 22:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1285048 00:18:25.631 22:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:18:25.631 22:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:18:25.631 22:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1285048' 00:18:25.631 killing process with pid 1285048 00:18:25.631 22:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1285048 00:18:25.631 Received shutdown signal, test time was about 1.000000 seconds 00:18:25.631 00:18:25.631 Latency(us) 00:18:25.631 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.631 =================================================================================================================== 00:18:25.631 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:25.631 22:42:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1285048 00:18:25.887 22:42:09 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:25.887 22:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:25.887 22:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:18:25.887 22:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:25.887 22:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:18:25.887 22:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:25.887 22:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:25.887 rmmod nvme_tcp 00:18:25.887 rmmod nvme_fabrics 00:18:25.887 rmmod nvme_keyring 00:18:25.887 22:42:09 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:25.887 22:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:18:25.887 22:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:18:25.887 22:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1284896 ']' 00:18:25.887 22:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1284896 00:18:25.887 22:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1284896 ']' 00:18:25.887 22:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1284896 00:18:25.887 22:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:18:25.887 22:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:18:25.887 22:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1284896 00:18:25.887 22:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:18:25.887 22:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:18:25.887 22:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1284896' 00:18:25.887 killing process with pid 1284896 00:18:25.887 22:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1284896 00:18:25.887 22:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1284896 00:18:26.144 22:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:26.144 22:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:26.144 22:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:26.144 22:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:26.144 22:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:26.144 22:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:18:26.144 22:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:26.144 22:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:28.048 22:42:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:28.048 22:42:11 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.cbtL5fUbhG /tmp/tmp.8TGxuJYtkm /tmp/tmp.myCErdnqKM 00:18:28.048 00:18:28.048 real 1m21.422s 00:18:28.048 user 2m8.000s 00:18:28.048 sys 0m28.903s 00:18:28.048 22:42:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1118 -- # xtrace_disable 00:18:28.048 22:42:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.048 ************************************ 00:18:28.048 END TEST nvmf_tls 00:18:28.048 ************************************ 00:18:28.308 22:42:11 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:18:28.308 22:42:11 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:28.308 22:42:11 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:18:28.308 22:42:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:18:28.308 22:42:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:28.308 ************************************ 00:18:28.308 START TEST nvmf_fips 00:18:28.308 ************************************ 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:28.308 * Looking for test storage... 
00:18:28.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:18:28.308 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # local es=0 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@644 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@630 -- # local arg=openssl 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@634 -- # type -t openssl 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # type -P openssl 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # arg=/usr/bin/openssl 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@636 -- # [[ -x /usr/bin/openssl ]] 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@645 -- # openssl md5 /dev/fd/62 00:18:28.309 Error setting digest 00:18:28.309 00B2FDDF1B7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:28.309 00B2FDDF1B7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@645 -- # es=1 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # 
xtrace_disable 00:18:28.309 22:42:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:30.848 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:30.848 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:30.848 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:30.848 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:30.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:30.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:18:30.848 00:18:30.848 --- 10.0.0.2 ping statistics --- 00:18:30.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.848 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:30.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:30.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:18:30.848 00:18:30.848 --- 10.0.0.1 ping statistics --- 00:18:30.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.848 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1287797 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1287797 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@823 -- # '[' -z 1287797 ']' 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:18:30.848 22:42:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:30.849 22:42:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:30.849 22:42:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:30.849 22:42:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:30.849 [2024-07-15 22:42:13.981909] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:18:30.849 [2024-07-15 22:42:13.982000] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:30.849 [2024-07-15 22:42:14.042996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.849 [2024-07-15 22:42:14.155347] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:30.849 [2024-07-15 22:42:14.155392] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:30.849 [2024-07-15 22:42:14.155420] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:30.849 [2024-07-15 22:42:14.155431] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:30.849 [2024-07-15 22:42:14.155440] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:30.849 [2024-07-15 22:42:14.155466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.845 22:42:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:31.845 22:42:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # return 0 00:18:31.845 22:42:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:31.845 22:42:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:31.845 22:42:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:31.846 22:42:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.846 22:42:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:31.846 22:42:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:31.846 22:42:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:31.846 22:42:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:31.846 22:42:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:31.846 22:42:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:31.846 22:42:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:31.846 22:42:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:31.846 [2024-07-15 22:42:15.248255] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:31.846 [2024-07-15 22:42:15.264231] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS 
support is considered experimental 00:18:31.846 [2024-07-15 22:42:15.264435] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:31.846 [2024-07-15 22:42:15.296181] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:31.846 malloc0 00:18:31.846 22:42:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:31.846 22:42:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1288030 00:18:31.846 22:42:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:31.846 22:42:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1288030 /var/tmp/bdevperf.sock 00:18:31.846 22:42:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@823 -- # '[' -z 1288030 ']' 00:18:31.846 22:42:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:31.846 22:42:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:31.846 22:42:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:31.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:31.846 22:42:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:31.846 22:42:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:32.105 [2024-07-15 22:42:15.388543] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:18:32.105 [2024-07-15 22:42:15.388623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1288030 ] 00:18:32.105 [2024-07-15 22:42:15.446630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.105 [2024-07-15 22:42:15.551159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:33.041 22:42:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:33.041 22:42:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # return 0 00:18:33.041 22:42:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:33.299 [2024-07-15 22:42:16.603809] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:33.299 [2024-07-15 22:42:16.603954] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:33.299 TLSTESTn1 00:18:33.299 22:42:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:33.557 Running I/O for 10 seconds... 
00:18:43.522 00:18:43.522 Latency(us) 00:18:43.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.522 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:43.522 Verification LBA range: start 0x0 length 0x2000 00:18:43.522 TLSTESTn1 : 10.06 1897.74 7.41 0.00 0.00 67252.46 6140.97 97478.73 00:18:43.522 =================================================================================================================== 00:18:43.522 Total : 1897.74 7.41 0.00 0.00 67252.46 6140.97 97478.73 00:18:43.522 0 00:18:43.522 22:42:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:18:43.522 22:42:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:18:43.522 22:42:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@800 -- # type=--id 00:18:43.522 22:42:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@801 -- # id=0 00:18:43.522 22:42:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@802 -- # '[' --id = --pid ']' 00:18:43.522 22:42:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:43.522 22:42:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # shm_files=nvmf_trace.0 00:18:43.522 22:42:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # [[ -z nvmf_trace.0 ]] 00:18:43.522 22:42:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # for n in $shm_files 00:18:43.522 22:42:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:43.522 nvmf_trace.0 00:18:43.522 22:42:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@815 -- # return 0 00:18:43.522 22:42:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1288030 00:18:43.522 22:42:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@942 -- # '[' -z 1288030 ']' 00:18:43.522 22:42:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # kill -0 
1288030 00:18:43.522 22:42:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # uname 00:18:43.522 22:42:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:18:43.522 22:42:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1288030 00:18:43.522 22:42:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:18:43.522 22:42:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:18:43.522 22:42:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1288030' 00:18:43.522 killing process with pid 1288030 00:18:43.522 22:42:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@961 -- # kill 1288030 00:18:43.522 Received shutdown signal, test time was about 10.000000 seconds 00:18:43.522 00:18:43.522 Latency(us) 00:18:43.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.522 =================================================================================================================== 00:18:43.522 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:43.522 [2024-07-15 22:42:27.003296] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:43.522 22:42:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # wait 1288030 00:18:43.778 22:42:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:43.778 22:42:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:43.778 22:42:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:18:43.778 22:42:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:43.778 22:42:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:18:43.778 22:42:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:43.778 22:42:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:18:43.778 rmmod nvme_tcp 00:18:44.035 rmmod nvme_fabrics 00:18:44.035 rmmod nvme_keyring 00:18:44.035 22:42:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:44.035 22:42:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:18:44.035 22:42:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:18:44.035 22:42:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1287797 ']' 00:18:44.035 22:42:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1287797 00:18:44.035 22:42:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@942 -- # '[' -z 1287797 ']' 00:18:44.035 22:42:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # kill -0 1287797 00:18:44.035 22:42:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # uname 00:18:44.035 22:42:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:18:44.035 22:42:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1287797 00:18:44.035 22:42:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:18:44.035 22:42:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:18:44.035 22:42:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1287797' 00:18:44.035 killing process with pid 1287797 00:18:44.035 22:42:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@961 -- # kill 1287797 00:18:44.035 [2024-07-15 22:42:27.369088] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:44.035 22:42:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # wait 1287797 00:18:44.292 22:42:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:44.292 22:42:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:44.292 22:42:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
00:18:44.292 22:42:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:44.292 22:42:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:44.292 22:42:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.292 22:42:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:44.292 22:42:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.817 22:42:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:46.817 22:42:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:46.817 00:18:46.817 real 0m18.133s 00:18:46.817 user 0m21.725s 00:18:46.817 sys 0m6.858s 00:18:46.817 22:42:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1118 -- # xtrace_disable 00:18:46.817 22:42:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:46.817 ************************************ 00:18:46.817 END TEST nvmf_fips 00:18:46.817 ************************************ 00:18:46.817 22:42:29 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:18:46.817 22:42:29 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:18:46.817 22:42:29 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:18:46.817 22:42:29 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:18:46.817 22:42:29 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:18:46.817 22:42:29 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:18:46.817 22:42:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:48.720 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:48.720 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:48.720 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:48.720 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:18:48.720 22:42:31 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:48.720 22:42:31 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:18:48.720 22:42:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:18:48.720 22:42:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:48.720 ************************************ 00:18:48.720 START TEST nvmf_perf_adq 00:18:48.720 ************************************ 00:18:48.720 22:42:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:48.720 * Looking for test storage... 00:18:48.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:48.720 22:42:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:48.720 22:42:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:18:48.720 22:42:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:48.720 22:42:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:48.720 22:42:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:48.720 22:42:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:48.720 22:42:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:48.720 22:42:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:48.720 22:42:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:48.720 22:42:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:48.720 22:42:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:48.720 22:42:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:48.720 22:42:31 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:48.720 22:42:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:48.720 22:42:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:48.720 22:42:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:48.720 22:42:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:48.720 22:42:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:48.721 22:42:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:48.721 22:42:31 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:48.721 22:42:31 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:48.721 22:42:31 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:48.721 22:42:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.721 22:42:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.721 22:42:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.721 22:42:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:18:48.721 22:42:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.721 22:42:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:18:48.721 22:42:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:18:48.721 22:42:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:48.721 22:42:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:48.721 22:42:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:48.721 22:42:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:48.721 22:42:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:48.721 22:42:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:48.721 22:42:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:48.721 22:42:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:18:48.721 22:42:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:48.721 22:42:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- 
# x722=() 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 
00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:50.623 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:50.623 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:50.623 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.623 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:50.624 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:50.624 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:50.624 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:50.624 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.624 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:50.624 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:50.624 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.624 22:42:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:50.624 22:42:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:50.624 22:42:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:18:50.624 22:42:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:18:50.624 22:42:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:18:50.624 22:42:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:18:51.190 22:42:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:18:53.085 22:42:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:58.349 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 
(0x8086 - 0x159b)' 00:18:58.349 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:58.349 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:58.349 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:58.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:58.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:18:58.349 00:18:58.349 --- 10.0.0.2 ping statistics --- 00:18:58.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.349 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:58.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:58.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:18:58.349 00:18:58.349 --- 10.0.0.1 ping statistics --- 00:18:58.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.349 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:18:58.349 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:58.350 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:18:58.350 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:58.350 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:58.350 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:58.350 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:58.350 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:58.350 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:58.350 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:58.350 22:42:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:58.350 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:58.350 22:42:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:58.350 22:42:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:58.350 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1293931 00:18:58.350 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:58.350 22:42:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1293931 00:18:58.350 22:42:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@823 
-- # '[' -z 1293931 ']' 00:18:58.350 22:42:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.350 22:42:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:58.350 22:42:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.350 22:42:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:58.350 22:42:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:58.350 [2024-07-15 22:42:41.641235] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:18:58.350 [2024-07-15 22:42:41.641336] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.350 [2024-07-15 22:42:41.711994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:58.350 [2024-07-15 22:42:41.832581] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:58.350 [2024-07-15 22:42:41.832645] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:58.350 [2024-07-15 22:42:41.832662] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:58.350 [2024-07-15 22:42:41.832676] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:58.350 [2024-07-15 22:42:41.832688] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
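The nvmf_tcp_init bring-up logged above (nvmf/common.sh@244-264) moves one port of the dual-port NIC into a private network namespace so the initiator and target exchange real TCP traffic over the wire. Condensed into a standalone sketch, with interface names, addresses, and the namespace name taken from the log; it requires root and the physical ice interfaces, so it is shown as a setup fragment rather than something run here:

```shell
# Target-side port goes into a namespace; initiator-side port stays in the root ns.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Let NVMe/TCP (port 4420) into the initiator side, then verify reachability.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
```

The nvmf_tgt process is then launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk ...`), which is why the target listens on 10.0.0.2 while the perf initiator connects from the root namespace.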
00:18:58.350 [2024-07-15 22:42:41.832772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.350 [2024-07-15 22:42:41.832826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:58.350 [2024-07-15 22:42:41.836898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:58.350 [2024-07-15 22:42:41.836910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.327 22:42:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:59.327 22:42:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # return 0 00:18:59.327 22:42:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:59.327 22:42:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:59.327 22:42:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.327 22:42:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:59.327 22:42:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:18:59.327 22:42:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:18:59.327 22:42:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:18:59.327 22:42:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:59.327 22:42:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.327 22:42:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:59.327 22:42:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:18:59.327 22:42:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:18:59.327 22:42:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 
00:18:59.327 22:42:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.327 22:42:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:59.327 22:42:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:18:59.327 22:42:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:59.327 22:42:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.327 22:42:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:59.327 22:42:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:18:59.327 22:42:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:59.327 22:42:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.327 [2024-07-15 22:42:42.809965] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:59.327 22:42:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:59.327 22:42:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:59.327 22:42:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:59.327 22:42:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.585 Malloc1 00:18:59.585 22:42:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:59.585 22:42:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:59.585 22:42:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:59.585 22:42:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.585 22:42:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:59.585 
22:42:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:59.585 22:42:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:59.585 22:42:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.585 22:42:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:59.585 22:42:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:59.585 22:42:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:59.585 22:42:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.585 [2024-07-15 22:42:42.864501] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:59.585 22:42:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:59.585 22:42:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1294111 00:18:59.585 22:42:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:18:59.585 22:42:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:01.486 22:42:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:19:01.486 22:42:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:01.486 22:42:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.486 22:42:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:01.486 22:42:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:19:01.486 "tick_rate": 2700000000, 00:19:01.486 "poll_groups": [ 00:19:01.486 
{ 00:19:01.486 "name": "nvmf_tgt_poll_group_000", 00:19:01.486 "admin_qpairs": 1, 00:19:01.486 "io_qpairs": 1, 00:19:01.486 "current_admin_qpairs": 1, 00:19:01.486 "current_io_qpairs": 1, 00:19:01.486 "pending_bdev_io": 0, 00:19:01.486 "completed_nvme_io": 18986, 00:19:01.486 "transports": [ 00:19:01.486 { 00:19:01.486 "trtype": "TCP" 00:19:01.486 } 00:19:01.486 ] 00:19:01.486 }, 00:19:01.486 { 00:19:01.486 "name": "nvmf_tgt_poll_group_001", 00:19:01.486 "admin_qpairs": 0, 00:19:01.486 "io_qpairs": 1, 00:19:01.486 "current_admin_qpairs": 0, 00:19:01.486 "current_io_qpairs": 1, 00:19:01.486 "pending_bdev_io": 0, 00:19:01.486 "completed_nvme_io": 19231, 00:19:01.486 "transports": [ 00:19:01.486 { 00:19:01.486 "trtype": "TCP" 00:19:01.486 } 00:19:01.486 ] 00:19:01.486 }, 00:19:01.486 { 00:19:01.486 "name": "nvmf_tgt_poll_group_002", 00:19:01.486 "admin_qpairs": 0, 00:19:01.486 "io_qpairs": 1, 00:19:01.486 "current_admin_qpairs": 0, 00:19:01.486 "current_io_qpairs": 1, 00:19:01.486 "pending_bdev_io": 0, 00:19:01.486 "completed_nvme_io": 19321, 00:19:01.486 "transports": [ 00:19:01.486 { 00:19:01.486 "trtype": "TCP" 00:19:01.486 } 00:19:01.486 ] 00:19:01.486 }, 00:19:01.486 { 00:19:01.486 "name": "nvmf_tgt_poll_group_003", 00:19:01.486 "admin_qpairs": 0, 00:19:01.486 "io_qpairs": 1, 00:19:01.486 "current_admin_qpairs": 0, 00:19:01.486 "current_io_qpairs": 1, 00:19:01.486 "pending_bdev_io": 0, 00:19:01.486 "completed_nvme_io": 18545, 00:19:01.486 "transports": [ 00:19:01.486 { 00:19:01.486 "trtype": "TCP" 00:19:01.486 } 00:19:01.486 ] 00:19:01.486 } 00:19:01.486 ] 00:19:01.486 }' 00:19:01.486 22:42:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:01.486 22:42:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:19:01.486 22:42:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:19:01.486 22:42:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 
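The perf_adq.sh@78 check above counts poll groups that currently own exactly one I/O qpair; ADQ should spread the four perf connections one per core, so the expected count is 4. A self-contained sketch of the same verification over an abridged copy of the stats JSON, using `grep -c` in place of the script's jq filter so it has no external dependency:

```shell
# Abridged nvmf_get_stats output from the run above: one line per poll group.
stats='"poll_groups": [
  { "name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 1 },
  { "name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 1 },
  { "name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 1 },
  { "name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 1 } ]'

# Count groups with one active I/O qpair; the harness fails unless this equals 4.
count=$(printf '%s\n' "$stats" | grep -c '"current_io_qpairs": 1')
echo "$count"
```

If connections had piled up on fewer cores, some groups would report `current_io_qpairs` of 0 or 2+, the count would drop below 4, and the `[[ 4 -ne 4 ]]` guard above would trip.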
00:19:01.486 22:42:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1294111 00:19:11.473 Initializing NVMe Controllers 00:19:11.473 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:11.473 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:11.473 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:11.473 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:11.473 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:11.473 Initialization complete. Launching workers. 00:19:11.473 ======================================================== 00:19:11.473 Latency(us) 00:19:11.474 Device Information : IOPS MiB/s Average min max 00:19:11.474 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10520.50 41.10 6084.09 1756.05 8175.37 00:19:11.474 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10848.30 42.38 5899.79 1930.54 8291.89 00:19:11.474 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11004.60 42.99 5816.11 1951.12 7872.91 00:19:11.474 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10727.40 41.90 5965.36 2679.87 7899.09 00:19:11.474 ======================================================== 00:19:11.474 Total : 43100.79 168.36 5939.73 1756.05 8291.89 00:19:11.474 00:19:11.474 22:42:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:19:11.474 22:42:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:11.474 22:42:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:11.474 22:42:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:11.474 22:42:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:11.474 22:42:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 
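As a quick consistency check on the summary table above, the Total IOPS line is just the sum of the four per-core rows (the log prints 43100.79 because it sums the unrounded values):

```shell
# Per-core IOPS reported by spdk_nvme_perf for lcores 4-7 in the table above.
total=$(awk 'BEGIN { t = 10520.50 + 10848.30 + 11004.60 + 10727.40; printf "%.1f", t }')
echo "$total"
```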
00:19:11.474 22:42:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:11.474 rmmod nvme_tcp 00:19:11.474 rmmod nvme_fabrics 00:19:11.474 rmmod nvme_keyring 00:19:11.474 22:42:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:11.474 22:42:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:11.474 22:42:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:11.474 22:42:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1293931 ']' 00:19:11.474 22:42:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1293931 00:19:11.474 22:42:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@942 -- # '[' -z 1293931 ']' 00:19:11.474 22:42:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # kill -0 1293931 00:19:11.474 22:42:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@947 -- # uname 00:19:11.474 22:42:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:19:11.474 22:42:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1293931 00:19:11.474 22:42:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:19:11.474 22:42:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:19:11.474 22:42:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1293931' 00:19:11.474 killing process with pid 1293931 00:19:11.474 22:42:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@961 -- # kill 1293931 00:19:11.474 22:42:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # wait 1293931 00:19:11.474 22:42:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:11.474 22:42:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:11.474 22:42:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:11.474 
22:42:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:11.474 22:42:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:11.474 22:42:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.474 22:42:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:11.474 22:42:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.407 22:42:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:12.407 22:42:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:19:12.407 22:42:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:12.666 22:42:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:15.200 22:42:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:20.477 22:43:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:19:20.477 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:20.477 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:20.477 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:20.477 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:20.477 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:20.478 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.478 22:43:03 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:20.478 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:19:20.478 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:20.478 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 
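The device-discovery loop logged here (nvmf/common.sh@382-401) finds the kernel interface behind each PCI function by globbing `/sys/bus/pci/devices/<bdf>/net/` and stripping the path. A runnable sketch of that glob-and-strip step against a mocked sysfs tree (the `mktemp` scratch directory and mock paths are illustrative, not part of the harness):

```shell
# Mock the sysfs layout the harness globs: /sys/bus/pci/devices/<bdf>/net/<ifname>
root=$(mktemp -d)
mkdir -p "$root/0000:0a:00.0/net/cvl_0_0" "$root/0000:0a:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:0a:00.0 0000:0a:00.1; do
    pci_net_devs=("$root/$pci/net/"*)        # glob, as in nvmf/common.sh@383
    pci_net_devs=("${pci_net_devs[@]##*/}")  # keep only the interface name
    net_devs+=("${pci_net_devs[@]}")
done
echo "${net_devs[@]}"
```

This yields the `cvl_0_0`/`cvl_0_1` names that the rest of the script wires into `NVMF_TARGET_INTERFACE` and `NVMF_INITIATOR_INTERFACE`.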
00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:20.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:20.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:19:20.478 00:19:20.478 --- 10.0.0.2 ping statistics --- 00:19:20.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.478 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:20.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:20.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:19:20.478 00:19:20.478 --- 10.0.0.1 ping statistics --- 00:19:20.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.478 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 
00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:20.478 net.core.busy_poll = 1 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:19:20.478 net.core.busy_read = 1 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:20.478 22:43:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1296726 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1296726 00:19:20.479 22:43:03 
nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@823 -- # '[' -z 1296726 ']' 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.479 [2024-07-15 22:43:03.508144] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:19:20.479 [2024-07-15 22:43:03.508242] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:20.479 [2024-07-15 22:43:03.576785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:20.479 [2024-07-15 22:43:03.686862] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:20.479 [2024-07-15 22:43:03.686940] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:20.479 [2024-07-15 22:43:03.686954] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.479 [2024-07-15 22:43:03.686965] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:20.479 [2024-07-15 22:43:03.686974] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
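The adq_configure_driver steps logged above split the NIC's queues into two traffic classes and steer NVMe/TCP flows into the second one in hardware. The same configuration as a standalone fragment (interface, namespace, and addresses taken from the log; it needs root plus an ADQ-capable ice NIC, so it is a sketch, not executed here):

```shell
NS="ip netns exec cvl_0_0_ns_spdk"
$NS ethtool --offload cvl_0_0 hw-tc-offload on
$NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# Two TCs: "map" is priority->TC, "queues" is <count>@<offset> per TC,
# so TC0 = queues 0-1 (default traffic) and TC1 = queues 2-3 (NVMe/TCP).
$NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
$NS tc qdisc add dev cvl_0_0 ingress
# Hardware-only (skip_sw) flower filter: TCP traffic to 10.0.0.2:4420 lands in TC1.
$NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
```

The busy_poll/busy_read sysctls complete the picture: with `--sock-priority 1` on the transport, the target's sockets busy-poll the ADQ queue set instead of taking interrupts.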
00:19:20.479 [2024-07-15 22:43:03.687024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.479 [2024-07-15 22:43:03.687047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.479 [2024-07-15 22:43:03.687106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:20.479 [2024-07-15 22:43:03.687109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # return 0 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 
00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.479 [2024-07-15 22:43:03.904745] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.479 Malloc1 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:20.479 
22:43:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.479 [2024-07-15 22:43:03.957946] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1296762 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:19:20.479 22:43:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:23.006 22:43:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:19:23.006 22:43:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:23.006 22:43:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:23.006 22:43:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:23.006 22:43:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:19:23.006 "tick_rate": 2700000000, 00:19:23.006 "poll_groups": [ 00:19:23.006 
{ 00:19:23.006 "name": "nvmf_tgt_poll_group_000", 00:19:23.006 "admin_qpairs": 1, 00:19:23.006 "io_qpairs": 1, 00:19:23.006 "current_admin_qpairs": 1, 00:19:23.006 "current_io_qpairs": 1, 00:19:23.006 "pending_bdev_io": 0, 00:19:23.006 "completed_nvme_io": 25167, 00:19:23.006 "transports": [ 00:19:23.006 { 00:19:23.006 "trtype": "TCP" 00:19:23.006 } 00:19:23.006 ] 00:19:23.006 }, 00:19:23.006 { 00:19:23.006 "name": "nvmf_tgt_poll_group_001", 00:19:23.006 "admin_qpairs": 0, 00:19:23.006 "io_qpairs": 3, 00:19:23.006 "current_admin_qpairs": 0, 00:19:23.006 "current_io_qpairs": 3, 00:19:23.006 "pending_bdev_io": 0, 00:19:23.006 "completed_nvme_io": 26978, 00:19:23.006 "transports": [ 00:19:23.006 { 00:19:23.006 "trtype": "TCP" 00:19:23.006 } 00:19:23.006 ] 00:19:23.006 }, 00:19:23.006 { 00:19:23.006 "name": "nvmf_tgt_poll_group_002", 00:19:23.006 "admin_qpairs": 0, 00:19:23.006 "io_qpairs": 0, 00:19:23.006 "current_admin_qpairs": 0, 00:19:23.006 "current_io_qpairs": 0, 00:19:23.006 "pending_bdev_io": 0, 00:19:23.006 "completed_nvme_io": 0, 00:19:23.006 "transports": [ 00:19:23.006 { 00:19:23.006 "trtype": "TCP" 00:19:23.006 } 00:19:23.006 ] 00:19:23.006 }, 00:19:23.006 { 00:19:23.006 "name": "nvmf_tgt_poll_group_003", 00:19:23.006 "admin_qpairs": 0, 00:19:23.006 "io_qpairs": 0, 00:19:23.006 "current_admin_qpairs": 0, 00:19:23.006 "current_io_qpairs": 0, 00:19:23.006 "pending_bdev_io": 0, 00:19:23.006 "completed_nvme_io": 0, 00:19:23.006 "transports": [ 00:19:23.006 { 00:19:23.006 "trtype": "TCP" 00:19:23.006 } 00:19:23.006 ] 00:19:23.006 } 00:19:23.006 ] 00:19:23.006 }' 00:19:23.006 22:43:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:23.006 22:43:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:19:23.006 22:43:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:19:23.006 22:43:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 
00:19:23.006 22:43:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1296762 00:19:31.407 Initializing NVMe Controllers 00:19:31.407 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:31.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:31.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:31.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:31.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:31.407 Initialization complete. Launching workers. 00:19:31.407 ======================================================== 00:19:31.407 Latency(us) 00:19:31.407 Device Information : IOPS MiB/s Average min max 00:19:31.407 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13281.56 51.88 4819.46 1617.65 7315.91 00:19:31.407 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4660.69 18.21 13735.55 2090.97 60160.28 00:19:31.407 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4617.29 18.04 13863.05 2054.24 60061.85 00:19:31.407 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4875.49 19.04 13130.62 2090.12 63125.36 00:19:31.407 ======================================================== 00:19:31.407 Total : 27435.03 107.17 9333.14 1617.65 63125.36 00:19:31.407 00:19:31.407 22:43:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:19:31.407 22:43:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:31.407 22:43:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:31.407 22:43:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:31.407 22:43:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:31.407 22:43:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in 
{1..20} 00:19:31.407 22:43:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:31.407 rmmod nvme_tcp 00:19:31.407 rmmod nvme_fabrics 00:19:31.407 rmmod nvme_keyring 00:19:31.407 22:43:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:31.407 22:43:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:31.407 22:43:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:31.407 22:43:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1296726 ']' 00:19:31.407 22:43:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1296726 00:19:31.407 22:43:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@942 -- # '[' -z 1296726 ']' 00:19:31.407 22:43:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # kill -0 1296726 00:19:31.407 22:43:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@947 -- # uname 00:19:31.407 22:43:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:19:31.407 22:43:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1296726 00:19:31.407 22:43:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:19:31.407 22:43:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:19:31.407 22:43:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1296726' 00:19:31.407 killing process with pid 1296726 00:19:31.407 22:43:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@961 -- # kill 1296726 00:19:31.407 22:43:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # wait 1296726 00:19:31.407 22:43:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:31.407 22:43:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:31.407 22:43:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
00:19:31.407 22:43:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:31.407 22:43:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:31.407 22:43:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.407 22:43:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:31.407 22:43:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.695 22:43:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:34.695 22:43:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:19:34.695 00:19:34.695 real 0m45.805s 00:19:34.695 user 2m43.161s 00:19:34.695 sys 0m9.710s 00:19:34.695 22:43:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1118 -- # xtrace_disable 00:19:34.695 22:43:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:34.695 ************************************ 00:19:34.695 END TEST nvmf_perf_adq 00:19:34.695 ************************************ 00:19:34.695 22:43:17 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:19:34.695 22:43:17 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:34.695 22:43:17 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:19:34.695 22:43:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:19:34.695 22:43:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:34.695 ************************************ 00:19:34.695 START TEST nvmf_shutdown 00:19:34.695 ************************************ 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:34.695 * Looking for 
test storage... 00:19:34.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:34.695 22:43:17 
nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # xtrace_disable 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:34.695 ************************************ 00:19:34.695 START TEST nvmf_shutdown_tc1 00:19:34.695 ************************************ 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1117 -- # nvmf_shutdown_tc1 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:34.695 22:43:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:34.696 22:43:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:34.696 22:43:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:34.696 22:43:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:34.696 22:43:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.696 22:43:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:34.696 22:43:17 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.696 22:43:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:34.696 22:43:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:34.696 22:43:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:34.696 22:43:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@298 -- # mlx=() 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:36.599 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:36.599 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:36.599 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:36.599 22:43:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:36.599 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:36.599 
22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:36.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:36.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:19:36.599 00:19:36.599 --- 10.0.0.2 ping statistics --- 00:19:36.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.599 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:36.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:36.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:19:36.599 00:19:36.599 --- 10.0.0.1 ping statistics --- 00:19:36.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.599 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
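The two ping checks above succeed because nvmf_tcp_init puts both endpoints on one /24: 10.0.0.1 on cvl_0_1 in the root namespace and 10.0.0.2 on cvl_0_0 inside the cvl_0_0_ns_spdk namespace. A minimal sketch of that address plan with Python's stdlib ipaddress module (interface names and addresses copied from the trace; nothing here touches real devices):

```python
import ipaddress

# Address plan used by nvmf/common.sh's nvmf_tcp_init, per the trace above.
subnet = ipaddress.ip_network("10.0.0.0/24")
initiator = ipaddress.ip_address("10.0.0.1")  # cvl_0_1, root namespace
target = ipaddress.ip_address("10.0.0.2")     # cvl_0_0, inside cvl_0_0_ns_spdk

# Both endpoints must sit in the same subnet for the two ping checks to pass.
assert initiator in subnet and target in subnet
print("same /24:", initiator in subnet and target in subnet)
```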
00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:36.599 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1300049 00:19:36.600 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:36.600 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1300049 00:19:36.600 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@823 -- # '[' -z 1300049 ']' 00:19:36.600 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.600 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:36.600 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.600 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:36.600 22:43:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:36.600 [2024-07-15 22:43:19.909050] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:19:36.600 [2024-07-15 22:43:19.909133] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.600 [2024-07-15 22:43:19.972324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:36.600 [2024-07-15 22:43:20.086250] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.600 [2024-07-15 22:43:20.086303] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:36.600 [2024-07-15 22:43:20.086331] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:36.600 [2024-07-15 22:43:20.086342] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:36.600 [2024-07-15 22:43:20.086352] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:36.600 [2024-07-15 22:43:20.086422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:36.600 [2024-07-15 22:43:20.086452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:36.600 [2024-07-15 22:43:20.086512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:36.600 [2024-07-15 22:43:20.086515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # return 0 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:37.534 [2024-07-15 22:43:20.915923] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:37.534 
22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:37.534 22:43:20 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:37.534 22:43:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:37.534 Malloc1 00:19:37.534 [2024-07-15 22:43:20.995166] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:37.534 Malloc2 00:19:37.792 Malloc3 00:19:37.792 Malloc4 00:19:37.792 Malloc5 00:19:37.792 Malloc6 00:19:37.792 Malloc7 00:19:38.051 Malloc8 00:19:38.051 Malloc9 00:19:38.051 Malloc10 00:19:38.051 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:38.051 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:38.051 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:38.051 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:38.051 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1300354 00:19:38.051 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1300354 
/var/tmp/bdevperf.sock 00:19:38.051 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@823 -- # '[' -z 1300354 ']' 00:19:38.051 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:38.051 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:38.051 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:38.051 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:38.051 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:38.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
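The repeated `for i in "${num_subsystems[@]}"` / `cat` steps earlier in the trace append one RPC stanza per subsystem (1..10) into rpcs.txt before `rpc_cmd` replays the file. The stanza bodies are not visible in the trace, so the lines written below are placeholders only; the sketch just reproduces the generate-then-collect shape of that loop:

```shell
#!/usr/bin/env sh
# Illustrative only: the stanza content is a placeholder, not shutdown.sh's real RPCs.
rpcs=rpcs.txt
: > "$rpcs"
for i in 1 2 3 4 5 6 7 8 9 10; do
  # One stanza per subsystem index, appended in order (mirrors the 'cat' loop).
  cat >> "$rpcs" <<EOF
# subsystem $i placeholder stanza
EOF
done
wc -l < "$rpcs"
```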
00:19:38.051 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:38.051 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:38.051 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:38.051 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:38.051 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:38.051 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:38.051 { 00:19:38.051 "params": { 00:19:38.051 "name": "Nvme$subsystem", 00:19:38.051 "trtype": "$TEST_TRANSPORT", 00:19:38.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:38.051 "adrfam": "ipv4", 00:19:38.051 "trsvcid": "$NVMF_PORT", 00:19:38.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:38.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:38.051 "hdgst": ${hdgst:-false}, 00:19:38.051 "ddgst": ${ddgst:-false} 00:19:38.051 }, 00:19:38.051 "method": "bdev_nvme_attach_controller" 00:19:38.051 } 00:19:38.051 EOF 00:19:38.051 )") 00:19:38.051 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:38.051 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:38.051 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:38.051 { 00:19:38.051 "params": { 00:19:38.051 "name": "Nvme$subsystem", 00:19:38.051 "trtype": "$TEST_TRANSPORT", 00:19:38.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:38.051 "adrfam": "ipv4", 00:19:38.051 "trsvcid": "$NVMF_PORT", 00:19:38.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:38.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:38.051 "hdgst": ${hdgst:-false}, 00:19:38.051 "ddgst": ${ddgst:-false} 00:19:38.051 
}, 00:19:38.051 "method": "bdev_nvme_attach_controller" 00:19:38.051 } 00:19:38.051 EOF 00:19:38.051 )") 00:19:38.051 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:38.051 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:38.051 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:38.051 { 00:19:38.051 "params": { 00:19:38.051 "name": "Nvme$subsystem", 00:19:38.051 "trtype": "$TEST_TRANSPORT", 00:19:38.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:38.051 "adrfam": "ipv4", 00:19:38.051 "trsvcid": "$NVMF_PORT", 00:19:38.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:38.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:38.051 "hdgst": ${hdgst:-false}, 00:19:38.051 "ddgst": ${ddgst:-false} 00:19:38.051 }, 00:19:38.051 "method": "bdev_nvme_attach_controller" 00:19:38.051 } 00:19:38.051 EOF 00:19:38.051 )") 00:19:38.051 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:38.051 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:38.051 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:38.051 { 00:19:38.052 "params": { 00:19:38.052 "name": "Nvme$subsystem", 00:19:38.052 "trtype": "$TEST_TRANSPORT", 00:19:38.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:38.052 "adrfam": "ipv4", 00:19:38.052 "trsvcid": "$NVMF_PORT", 00:19:38.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:38.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:38.052 "hdgst": ${hdgst:-false}, 00:19:38.052 "ddgst": ${ddgst:-false} 00:19:38.052 }, 00:19:38.052 "method": "bdev_nvme_attach_controller" 00:19:38.052 } 00:19:38.052 EOF 00:19:38.052 )") 00:19:38.052 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:38.052 22:43:21 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:38.052 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:38.052 { 00:19:38.052 "params": { 00:19:38.052 "name": "Nvme$subsystem", 00:19:38.052 "trtype": "$TEST_TRANSPORT", 00:19:38.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:38.052 "adrfam": "ipv4", 00:19:38.052 "trsvcid": "$NVMF_PORT", 00:19:38.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:38.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:38.052 "hdgst": ${hdgst:-false}, 00:19:38.052 "ddgst": ${ddgst:-false} 00:19:38.052 }, 00:19:38.052 "method": "bdev_nvme_attach_controller" 00:19:38.052 } 00:19:38.052 EOF 00:19:38.052 )") 00:19:38.052 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:38.052 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:38.052 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:38.052 { 00:19:38.052 "params": { 00:19:38.052 "name": "Nvme$subsystem", 00:19:38.052 "trtype": "$TEST_TRANSPORT", 00:19:38.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:38.052 "adrfam": "ipv4", 00:19:38.052 "trsvcid": "$NVMF_PORT", 00:19:38.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:38.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:38.052 "hdgst": ${hdgst:-false}, 00:19:38.052 "ddgst": ${ddgst:-false} 00:19:38.052 }, 00:19:38.052 "method": "bdev_nvme_attach_controller" 00:19:38.052 } 00:19:38.052 EOF 00:19:38.052 )") 00:19:38.052 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:38.052 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:38.052 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:38.052 { 00:19:38.052 
"params": { 00:19:38.052 "name": "Nvme$subsystem", 00:19:38.052 "trtype": "$TEST_TRANSPORT", 00:19:38.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:38.052 "adrfam": "ipv4", 00:19:38.052 "trsvcid": "$NVMF_PORT", 00:19:38.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:38.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:38.052 "hdgst": ${hdgst:-false}, 00:19:38.052 "ddgst": ${ddgst:-false} 00:19:38.052 }, 00:19:38.052 "method": "bdev_nvme_attach_controller" 00:19:38.052 } 00:19:38.052 EOF 00:19:38.052 )") 00:19:38.052 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:38.052 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:38.052 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:38.052 { 00:19:38.052 "params": { 00:19:38.052 "name": "Nvme$subsystem", 00:19:38.052 "trtype": "$TEST_TRANSPORT", 00:19:38.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:38.052 "adrfam": "ipv4", 00:19:38.052 "trsvcid": "$NVMF_PORT", 00:19:38.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:38.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:38.052 "hdgst": ${hdgst:-false}, 00:19:38.052 "ddgst": ${ddgst:-false} 00:19:38.052 }, 00:19:38.052 "method": "bdev_nvme_attach_controller" 00:19:38.052 } 00:19:38.052 EOF 00:19:38.052 )") 00:19:38.052 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:38.052 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:38.052 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:38.052 { 00:19:38.052 "params": { 00:19:38.052 "name": "Nvme$subsystem", 00:19:38.052 "trtype": "$TEST_TRANSPORT", 00:19:38.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:38.052 "adrfam": "ipv4", 00:19:38.052 "trsvcid": "$NVMF_PORT", 00:19:38.052 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:19:38.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:38.052 "hdgst": ${hdgst:-false}, 00:19:38.052 "ddgst": ${ddgst:-false} 00:19:38.052 }, 00:19:38.052 "method": "bdev_nvme_attach_controller" 00:19:38.052 } 00:19:38.052 EOF 00:19:38.052 )") 00:19:38.052 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:38.052 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:38.052 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:38.052 { 00:19:38.052 "params": { 00:19:38.052 "name": "Nvme$subsystem", 00:19:38.052 "trtype": "$TEST_TRANSPORT", 00:19:38.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:38.052 "adrfam": "ipv4", 00:19:38.052 "trsvcid": "$NVMF_PORT", 00:19:38.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:38.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:38.052 "hdgst": ${hdgst:-false}, 00:19:38.052 "ddgst": ${ddgst:-false} 00:19:38.052 }, 00:19:38.052 "method": "bdev_nvme_attach_controller" 00:19:38.052 } 00:19:38.052 EOF 00:19:38.052 )") 00:19:38.052 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:38.052 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:19:38.052 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:38.052 22:43:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:38.052 "params": { 00:19:38.052 "name": "Nvme1", 00:19:38.052 "trtype": "tcp", 00:19:38.052 "traddr": "10.0.0.2", 00:19:38.052 "adrfam": "ipv4", 00:19:38.052 "trsvcid": "4420", 00:19:38.052 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.052 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:38.052 "hdgst": false, 00:19:38.052 "ddgst": false 00:19:38.052 }, 00:19:38.052 "method": "bdev_nvme_attach_controller" 00:19:38.052 },{ 00:19:38.052 "params": { 00:19:38.052 "name": "Nvme2", 00:19:38.052 "trtype": "tcp", 00:19:38.052 "traddr": "10.0.0.2", 00:19:38.052 "adrfam": "ipv4", 00:19:38.052 "trsvcid": "4420", 00:19:38.052 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:38.052 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:38.052 "hdgst": false, 00:19:38.052 "ddgst": false 00:19:38.052 }, 00:19:38.052 "method": "bdev_nvme_attach_controller" 00:19:38.052 },{ 00:19:38.052 "params": { 00:19:38.052 "name": "Nvme3", 00:19:38.052 "trtype": "tcp", 00:19:38.052 "traddr": "10.0.0.2", 00:19:38.052 "adrfam": "ipv4", 00:19:38.052 "trsvcid": "4420", 00:19:38.052 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:38.052 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:38.052 "hdgst": false, 00:19:38.052 "ddgst": false 00:19:38.052 }, 00:19:38.052 "method": "bdev_nvme_attach_controller" 00:19:38.052 },{ 00:19:38.052 "params": { 00:19:38.052 "name": "Nvme4", 00:19:38.052 "trtype": "tcp", 00:19:38.052 "traddr": "10.0.0.2", 00:19:38.052 "adrfam": "ipv4", 00:19:38.052 "trsvcid": "4420", 00:19:38.052 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:38.052 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:38.052 "hdgst": false, 00:19:38.052 "ddgst": false 00:19:38.052 }, 00:19:38.052 "method": "bdev_nvme_attach_controller" 00:19:38.052 },{ 00:19:38.052 "params": { 00:19:38.052 "name": "Nvme5", 00:19:38.052 
"trtype": "tcp", 00:19:38.052 "traddr": "10.0.0.2", 00:19:38.052 "adrfam": "ipv4", 00:19:38.052 "trsvcid": "4420", 00:19:38.052 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:38.052 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:38.052 "hdgst": false, 00:19:38.052 "ddgst": false 00:19:38.052 }, 00:19:38.052 "method": "bdev_nvme_attach_controller" 00:19:38.052 },{ 00:19:38.052 "params": { 00:19:38.052 "name": "Nvme6", 00:19:38.052 "trtype": "tcp", 00:19:38.052 "traddr": "10.0.0.2", 00:19:38.052 "adrfam": "ipv4", 00:19:38.052 "trsvcid": "4420", 00:19:38.052 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:38.052 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:38.052 "hdgst": false, 00:19:38.052 "ddgst": false 00:19:38.052 }, 00:19:38.052 "method": "bdev_nvme_attach_controller" 00:19:38.052 },{ 00:19:38.052 "params": { 00:19:38.052 "name": "Nvme7", 00:19:38.052 "trtype": "tcp", 00:19:38.052 "traddr": "10.0.0.2", 00:19:38.052 "adrfam": "ipv4", 00:19:38.052 "trsvcid": "4420", 00:19:38.053 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:38.053 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:38.053 "hdgst": false, 00:19:38.053 "ddgst": false 00:19:38.053 }, 00:19:38.053 "method": "bdev_nvme_attach_controller" 00:19:38.053 },{ 00:19:38.053 "params": { 00:19:38.053 "name": "Nvme8", 00:19:38.053 "trtype": "tcp", 00:19:38.053 "traddr": "10.0.0.2", 00:19:38.053 "adrfam": "ipv4", 00:19:38.053 "trsvcid": "4420", 00:19:38.053 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:38.053 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:38.053 "hdgst": false, 00:19:38.053 "ddgst": false 00:19:38.053 }, 00:19:38.053 "method": "bdev_nvme_attach_controller" 00:19:38.053 },{ 00:19:38.053 "params": { 00:19:38.053 "name": "Nvme9", 00:19:38.053 "trtype": "tcp", 00:19:38.053 "traddr": "10.0.0.2", 00:19:38.053 "adrfam": "ipv4", 00:19:38.053 "trsvcid": "4420", 00:19:38.053 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:38.053 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:38.053 "hdgst": false, 00:19:38.053 "ddgst": 
false 00:19:38.053 }, 00:19:38.053 "method": "bdev_nvme_attach_controller" 00:19:38.053 },{ 00:19:38.053 "params": { 00:19:38.053 "name": "Nvme10", 00:19:38.053 "trtype": "tcp", 00:19:38.053 "traddr": "10.0.0.2", 00:19:38.053 "adrfam": "ipv4", 00:19:38.053 "trsvcid": "4420", 00:19:38.053 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:38.053 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:38.053 "hdgst": false, 00:19:38.053 "ddgst": false 00:19:38.053 }, 00:19:38.053 "method": "bdev_nvme_attach_controller" 00:19:38.053 }' 00:19:38.053 [2024-07-15 22:43:21.522451] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:19:38.053 [2024-07-15 22:43:21.522525] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:38.316 [2024-07-15 22:43:21.585901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.316 [2024-07-15 22:43:21.695727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.209 22:43:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:40.209 22:43:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # return 0 00:19:40.209 22:43:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:40.209 22:43:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:40.209 22:43:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:40.209 22:43:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:40.209 22:43:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1300354 00:19:40.209 22:43:23 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:19:40.209 22:43:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:19:41.140 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1300354 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:19:41.141 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1300049 00:19:41.141 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:41.141 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:41.141 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:41.141 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:41.141 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:41.141 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:41.141 { 00:19:41.141 "params": { 00:19:41.141 "name": "Nvme$subsystem", 00:19:41.141 "trtype": "$TEST_TRANSPORT", 00:19:41.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:41.141 "adrfam": "ipv4", 00:19:41.141 "trsvcid": "$NVMF_PORT", 00:19:41.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:41.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:41.141 "hdgst": ${hdgst:-false}, 00:19:41.141 "ddgst": ${ddgst:-false} 00:19:41.141 }, 00:19:41.141 "method": "bdev_nvme_attach_controller" 00:19:41.141 } 00:19:41.141 EOF 00:19:41.141 )") 00:19:41.141 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:19:41.141 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:41.141 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:41.141 { 00:19:41.141 "params": { 00:19:41.141 "name": "Nvme$subsystem", 00:19:41.141 "trtype": "$TEST_TRANSPORT", 00:19:41.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:41.141 "adrfam": "ipv4", 00:19:41.141 "trsvcid": "$NVMF_PORT", 00:19:41.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:41.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:41.141 "hdgst": ${hdgst:-false}, 00:19:41.141 "ddgst": ${ddgst:-false} 00:19:41.141 }, 00:19:41.141 "method": "bdev_nvme_attach_controller" 00:19:41.141 } 00:19:41.141 EOF 00:19:41.141 )") 00:19:41.141 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:41.141 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:41.141 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:41.141 { 00:19:41.141 "params": { 00:19:41.141 "name": "Nvme$subsystem", 00:19:41.141 "trtype": "$TEST_TRANSPORT", 00:19:41.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:41.141 "adrfam": "ipv4", 00:19:41.141 "trsvcid": "$NVMF_PORT", 00:19:41.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:41.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:41.141 "hdgst": ${hdgst:-false}, 00:19:41.141 "ddgst": ${ddgst:-false} 00:19:41.141 }, 00:19:41.141 "method": "bdev_nvme_attach_controller" 00:19:41.141 } 00:19:41.141 EOF 00:19:41.141 )") 00:19:41.141 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:41.141 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:41.141 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:19:41.141 { 00:19:41.141 "params": { 00:19:41.141 "name": "Nvme$subsystem", 00:19:41.141 "trtype": "$TEST_TRANSPORT", 00:19:41.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:41.141 "adrfam": "ipv4", 00:19:41.141 "trsvcid": "$NVMF_PORT", 00:19:41.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:41.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:41.141 "hdgst": ${hdgst:-false}, 00:19:41.141 "ddgst": ${ddgst:-false} 00:19:41.141 }, 00:19:41.141 "method": "bdev_nvme_attach_controller" 00:19:41.141 } 00:19:41.141 EOF 00:19:41.141 )") 00:19:41.141 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:41.141 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:41.141 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:41.141 { 00:19:41.141 "params": { 00:19:41.141 "name": "Nvme$subsystem", 00:19:41.141 "trtype": "$TEST_TRANSPORT", 00:19:41.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:41.141 "adrfam": "ipv4", 00:19:41.141 "trsvcid": "$NVMF_PORT", 00:19:41.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:41.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:41.141 "hdgst": ${hdgst:-false}, 00:19:41.141 "ddgst": ${ddgst:-false} 00:19:41.141 }, 00:19:41.141 "method": "bdev_nvme_attach_controller" 00:19:41.141 } 00:19:41.141 EOF 00:19:41.141 )") 00:19:41.141 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:41.141 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:41.141 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:41.141 { 00:19:41.141 "params": { 00:19:41.141 "name": "Nvme$subsystem", 00:19:41.141 "trtype": "$TEST_TRANSPORT", 00:19:41.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:41.141 "adrfam": "ipv4", 00:19:41.141 "trsvcid": "$NVMF_PORT", 
00:19:41.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:41.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:41.141 "hdgst": ${hdgst:-false}, 00:19:41.141 "ddgst": ${ddgst:-false} 00:19:41.141 }, 00:19:41.141 "method": "bdev_nvme_attach_controller" 00:19:41.141 } 00:19:41.141 EOF 00:19:41.141 )") 00:19:41.141 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:41.141 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:41.141 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:41.141 { 00:19:41.141 "params": { 00:19:41.141 "name": "Nvme$subsystem", 00:19:41.141 "trtype": "$TEST_TRANSPORT", 00:19:41.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:41.141 "adrfam": "ipv4", 00:19:41.141 "trsvcid": "$NVMF_PORT", 00:19:41.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:41.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:41.141 "hdgst": ${hdgst:-false}, 00:19:41.141 "ddgst": ${ddgst:-false} 00:19:41.141 }, 00:19:41.141 "method": "bdev_nvme_attach_controller" 00:19:41.141 } 00:19:41.141 EOF 00:19:41.141 )") 00:19:41.141 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:41.141 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:41.141 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:41.141 { 00:19:41.141 "params": { 00:19:41.141 "name": "Nvme$subsystem", 00:19:41.141 "trtype": "$TEST_TRANSPORT", 00:19:41.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:41.141 "adrfam": "ipv4", 00:19:41.141 "trsvcid": "$NVMF_PORT", 00:19:41.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:41.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:41.141 "hdgst": ${hdgst:-false}, 00:19:41.141 "ddgst": ${ddgst:-false} 00:19:41.141 }, 00:19:41.141 
"method": "bdev_nvme_attach_controller" 00:19:41.141 } 00:19:41.141 EOF 00:19:41.141 )") 00:19:41.141 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:41.141 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:41.141 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:41.141 { 00:19:41.141 "params": { 00:19:41.141 "name": "Nvme$subsystem", 00:19:41.141 "trtype": "$TEST_TRANSPORT", 00:19:41.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:41.141 "adrfam": "ipv4", 00:19:41.141 "trsvcid": "$NVMF_PORT", 00:19:41.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:41.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:41.141 "hdgst": ${hdgst:-false}, 00:19:41.142 "ddgst": ${ddgst:-false} 00:19:41.142 }, 00:19:41.142 "method": "bdev_nvme_attach_controller" 00:19:41.142 } 00:19:41.142 EOF 00:19:41.142 )") 00:19:41.142 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:41.142 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:41.142 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:41.142 { 00:19:41.142 "params": { 00:19:41.142 "name": "Nvme$subsystem", 00:19:41.142 "trtype": "$TEST_TRANSPORT", 00:19:41.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:41.142 "adrfam": "ipv4", 00:19:41.142 "trsvcid": "$NVMF_PORT", 00:19:41.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:41.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:41.142 "hdgst": ${hdgst:-false}, 00:19:41.142 "ddgst": ${ddgst:-false} 00:19:41.142 }, 00:19:41.142 "method": "bdev_nvme_attach_controller" 00:19:41.142 } 00:19:41.142 EOF 00:19:41.142 )") 00:19:41.142 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:41.142 22:43:24 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:19:41.142 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:41.142 22:43:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:41.142 "params": { 00:19:41.142 "name": "Nvme1", 00:19:41.142 "trtype": "tcp", 00:19:41.142 "traddr": "10.0.0.2", 00:19:41.142 "adrfam": "ipv4", 00:19:41.142 "trsvcid": "4420", 00:19:41.142 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:41.142 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:41.142 "hdgst": false, 00:19:41.142 "ddgst": false 00:19:41.142 }, 00:19:41.142 "method": "bdev_nvme_attach_controller" 00:19:41.142 },{ 00:19:41.142 "params": { 00:19:41.142 "name": "Nvme2", 00:19:41.142 "trtype": "tcp", 00:19:41.142 "traddr": "10.0.0.2", 00:19:41.142 "adrfam": "ipv4", 00:19:41.142 "trsvcid": "4420", 00:19:41.142 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:41.142 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:41.142 "hdgst": false, 00:19:41.142 "ddgst": false 00:19:41.142 }, 00:19:41.142 "method": "bdev_nvme_attach_controller" 00:19:41.142 },{ 00:19:41.142 "params": { 00:19:41.142 "name": "Nvme3", 00:19:41.142 "trtype": "tcp", 00:19:41.142 "traddr": "10.0.0.2", 00:19:41.142 "adrfam": "ipv4", 00:19:41.142 "trsvcid": "4420", 00:19:41.142 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:41.142 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:41.142 "hdgst": false, 00:19:41.142 "ddgst": false 00:19:41.142 }, 00:19:41.142 "method": "bdev_nvme_attach_controller" 00:19:41.142 },{ 00:19:41.142 "params": { 00:19:41.142 "name": "Nvme4", 00:19:41.142 "trtype": "tcp", 00:19:41.142 "traddr": "10.0.0.2", 00:19:41.142 "adrfam": "ipv4", 00:19:41.142 "trsvcid": "4420", 00:19:41.142 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:41.142 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:41.142 "hdgst": false, 00:19:41.142 "ddgst": false 00:19:41.142 }, 00:19:41.142 "method": "bdev_nvme_attach_controller" 00:19:41.142 
},{ 00:19:41.142 "params": { 00:19:41.142 "name": "Nvme5", 00:19:41.142 "trtype": "tcp", 00:19:41.142 "traddr": "10.0.0.2", 00:19:41.142 "adrfam": "ipv4", 00:19:41.142 "trsvcid": "4420", 00:19:41.142 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:41.142 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:41.142 "hdgst": false, 00:19:41.142 "ddgst": false 00:19:41.142 }, 00:19:41.142 "method": "bdev_nvme_attach_controller" 00:19:41.142 },{ 00:19:41.142 "params": { 00:19:41.142 "name": "Nvme6", 00:19:41.142 "trtype": "tcp", 00:19:41.142 "traddr": "10.0.0.2", 00:19:41.142 "adrfam": "ipv4", 00:19:41.142 "trsvcid": "4420", 00:19:41.142 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:41.142 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:41.142 "hdgst": false, 00:19:41.142 "ddgst": false 00:19:41.142 }, 00:19:41.142 "method": "bdev_nvme_attach_controller" 00:19:41.142 },{ 00:19:41.142 "params": { 00:19:41.142 "name": "Nvme7", 00:19:41.142 "trtype": "tcp", 00:19:41.142 "traddr": "10.0.0.2", 00:19:41.142 "adrfam": "ipv4", 00:19:41.142 "trsvcid": "4420", 00:19:41.142 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:41.142 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:41.142 "hdgst": false, 00:19:41.142 "ddgst": false 00:19:41.142 }, 00:19:41.142 "method": "bdev_nvme_attach_controller" 00:19:41.142 },{ 00:19:41.142 "params": { 00:19:41.142 "name": "Nvme8", 00:19:41.142 "trtype": "tcp", 00:19:41.142 "traddr": "10.0.0.2", 00:19:41.142 "adrfam": "ipv4", 00:19:41.142 "trsvcid": "4420", 00:19:41.142 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:41.142 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:41.142 "hdgst": false, 00:19:41.142 "ddgst": false 00:19:41.142 }, 00:19:41.142 "method": "bdev_nvme_attach_controller" 00:19:41.142 },{ 00:19:41.142 "params": { 00:19:41.142 "name": "Nvme9", 00:19:41.142 "trtype": "tcp", 00:19:41.142 "traddr": "10.0.0.2", 00:19:41.142 "adrfam": "ipv4", 00:19:41.142 "trsvcid": "4420", 00:19:41.142 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:41.142 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:19:41.142 "hdgst": false, 00:19:41.142 "ddgst": false 00:19:41.142 }, 00:19:41.142 "method": "bdev_nvme_attach_controller" 00:19:41.142 },{ 00:19:41.142 "params": { 00:19:41.142 "name": "Nvme10", 00:19:41.142 "trtype": "tcp", 00:19:41.142 "traddr": "10.0.0.2", 00:19:41.142 "adrfam": "ipv4", 00:19:41.142 "trsvcid": "4420", 00:19:41.142 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:41.142 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:41.142 "hdgst": false, 00:19:41.142 "ddgst": false 00:19:41.142 }, 00:19:41.142 "method": "bdev_nvme_attach_controller" 00:19:41.142 }' 00:19:41.142 [2024-07-15 22:43:24.537972] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:19:41.142 [2024-07-15 22:43:24.538061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1300661 ] 00:19:41.142 [2024-07-15 22:43:24.603899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.400 [2024-07-15 22:43:24.716948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.770 Running I/O for 1 seconds... 
00:19:44.140 00:19:44.140 Latency(us) 00:19:44.140 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.140 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:44.140 Verification LBA range: start 0x0 length 0x400 00:19:44.140 Nvme1n1 : 1.07 243.60 15.22 0.00 0.00 259672.61 5437.06 237677.23 00:19:44.140 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:44.140 Verification LBA range: start 0x0 length 0x400 00:19:44.140 Nvme2n1 : 1.09 235.76 14.74 0.00 0.00 263916.85 23690.05 234570.33 00:19:44.140 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:44.140 Verification LBA range: start 0x0 length 0x400 00:19:44.140 Nvme3n1 : 1.08 236.36 14.77 0.00 0.00 258776.56 21359.88 248551.35 00:19:44.140 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:44.140 Verification LBA range: start 0x0 length 0x400 00:19:44.140 Nvme4n1 : 1.07 240.15 15.01 0.00 0.00 249256.58 16505.36 253211.69 00:19:44.140 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:44.140 Verification LBA range: start 0x0 length 0x400 00:19:44.140 Nvme5n1 : 1.11 229.76 14.36 0.00 0.00 257572.98 21651.15 251658.24 00:19:44.140 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:44.140 Verification LBA range: start 0x0 length 0x400 00:19:44.140 Nvme6n1 : 1.12 228.49 14.28 0.00 0.00 254063.88 21942.42 246997.90 00:19:44.140 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:44.140 Verification LBA range: start 0x0 length 0x400 00:19:44.140 Nvme7n1 : 1.19 269.45 16.84 0.00 0.00 213373.53 15243.19 250104.79 00:19:44.140 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:44.140 Verification LBA range: start 0x0 length 0x400 00:19:44.140 Nvme8n1 : 1.20 266.25 16.64 0.00 0.00 212311.00 4975.88 256318.58 00:19:44.140 Job: Nvme9n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:19:44.140 Verification LBA range: start 0x0 length 0x400 00:19:44.140 Nvme9n1 : 1.19 267.97 16.75 0.00 0.00 207654.46 12913.02 257872.02 00:19:44.140 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:44.140 Verification LBA range: start 0x0 length 0x400 00:19:44.140 Nvme10n1 : 1.18 221.25 13.83 0.00 0.00 246082.97 1881.13 284280.60 00:19:44.140 =================================================================================================================== 00:19:44.140 Total : 2439.03 152.44 0.00 0.00 240138.66 1881.13 284280.60 00:19:44.140 22:43:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:19:44.140 22:43:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:44.140 22:43:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:44.140 22:43:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:44.140 22:43:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:44.140 22:43:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:44.140 22:43:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:19:44.140 22:43:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:44.140 22:43:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:19:44.140 22:43:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:44.140 22:43:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:44.140 rmmod nvme_tcp 00:19:44.399 rmmod nvme_fabrics 00:19:44.399 rmmod 
nvme_keyring 00:19:44.399 22:43:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:44.399 22:43:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:19:44.399 22:43:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:19:44.399 22:43:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1300049 ']' 00:19:44.399 22:43:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1300049 00:19:44.399 22:43:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@942 -- # '[' -z 1300049 ']' 00:19:44.399 22:43:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # kill -0 1300049 00:19:44.399 22:43:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@947 -- # uname 00:19:44.399 22:43:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:19:44.399 22:43:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1300049 00:19:44.399 22:43:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:19:44.399 22:43:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:19:44.399 22:43:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1300049' 00:19:44.399 killing process with pid 1300049 00:19:44.399 22:43:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@961 -- # kill 1300049 00:19:44.399 22:43:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # wait 1300049 00:19:44.965 22:43:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:44.965 22:43:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:44.965 22:43:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:44.966 22:43:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:44.966 22:43:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:44.966 22:43:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.966 22:43:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:44.966 22:43:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:46.871 00:19:46.871 real 0m12.595s 00:19:46.871 user 0m37.461s 00:19:46.871 sys 0m3.246s 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:46.871 ************************************ 00:19:46.871 END TEST nvmf_shutdown_tc1 00:19:46.871 ************************************ 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1136 -- # return 0 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # xtrace_disable 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:46.871 ************************************ 00:19:46.871 START TEST nvmf_shutdown_tc2 00:19:46.871 ************************************ 
00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1117 -- # nvmf_shutdown_tc2 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:46.871 22:43:30 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:46.871 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:46.871 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:46.871 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:46.872 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.872 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:46.872 22:43:30 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:46.872 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:46.872 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:46.872 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.872 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:46.872 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:46.872 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.872 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:46.872 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.872 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:46.872 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:46.872 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:46.872 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:46.872 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.872 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:46.872 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:46.872 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.872 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:46.872 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:46.872 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:46.872 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:47.130 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:47.130 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:47.130 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:47.130 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:47.130 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:47.130 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:47.130 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:47.130 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:47.130 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip 
link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:47.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:47.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:19:47.131 00:19:47.131 --- 10.0.0.2 ping statistics --- 00:19:47.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.131 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:47.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:47.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:19:47.131 00:19:47.131 --- 10.0.0.1 ping statistics --- 00:19:47.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.131 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1301542 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1301542 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@823 -- # '[' -z 1301542 ']' 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:47.131 22:43:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:47.131 [2024-07-15 22:43:30.574649] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:19:47.131 [2024-07-15 22:43:30.574734] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:47.389 [2024-07-15 22:43:30.638990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:47.389 [2024-07-15 22:43:30.746577] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:47.389 [2024-07-15 22:43:30.746626] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:47.389 [2024-07-15 22:43:30.746653] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:47.389 [2024-07-15 22:43:30.746664] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:47.389 [2024-07-15 22:43:30.746673] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:47.389 [2024-07-15 22:43:30.746726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:47.389 [2024-07-15 22:43:30.746792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:47.389 [2024-07-15 22:43:30.746850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:47.389 [2024-07-15 22:43:30.746852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.324 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:48.324 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # return 0 00:19:48.324 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:48.324 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:48.324 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:48.324 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.324 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:48.324 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:48.324 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:48.324 [2024-07-15 22:43:31.537953] tcp.c: 672:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:19:48.324 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:48.324 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:48.324 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:48.324 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:48.324 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:48.324 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:48.324 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:48.324 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:48.324 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:48.324 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:48.324 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:48.324 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:48.324 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:48.324 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:48.324 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:48.324 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:48.324 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for 
i in "${num_subsystems[@]}" 00:19:48.325 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:48.325 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:48.325 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:48.325 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:48.325 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:48.325 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:48.325 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:48.325 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:48.325 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:48.325 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:48.325 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:48.325 22:43:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:48.325 Malloc1 00:19:48.325 [2024-07-15 22:43:31.617790] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:48.325 Malloc2 00:19:48.325 Malloc3 00:19:48.325 Malloc4 00:19:48.325 Malloc5 00:19:48.583 Malloc6 00:19:48.583 Malloc7 00:19:48.583 Malloc8 00:19:48.583 Malloc9 00:19:48.583 Malloc10 00:19:48.583 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:48.583 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:48.583 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:19:48.583 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:48.583 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1301733 00:19:48.583 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1301733 /var/tmp/bdevperf.sock 00:19:48.583 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@823 -- # '[' -z 1301733 ']' 00:19:48.583 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:48.583 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:48.583 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:48.583 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:48.583 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:19:48.583 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:48.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:48.583 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:19:48.583 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:48.583 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:48.583 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:48.583 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:48.583 { 00:19:48.583 "params": { 00:19:48.583 "name": "Nvme$subsystem", 00:19:48.583 "trtype": "$TEST_TRANSPORT", 00:19:48.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.583 "adrfam": "ipv4", 00:19:48.583 "trsvcid": "$NVMF_PORT", 00:19:48.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.583 "hdgst": ${hdgst:-false}, 00:19:48.583 "ddgst": ${ddgst:-false} 00:19:48.583 }, 00:19:48.583 "method": "bdev_nvme_attach_controller" 00:19:48.583 } 00:19:48.583 EOF 00:19:48.583 )") 00:19:48.583 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:48.583 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:48.583 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:48.583 { 00:19:48.583 "params": { 00:19:48.583 "name": "Nvme$subsystem", 00:19:48.583 "trtype": "$TEST_TRANSPORT", 00:19:48.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.583 "adrfam": "ipv4", 00:19:48.583 "trsvcid": "$NVMF_PORT", 00:19:48.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.583 "hdgst": ${hdgst:-false}, 00:19:48.583 "ddgst": ${ddgst:-false} 00:19:48.583 }, 00:19:48.583 "method": "bdev_nvme_attach_controller" 00:19:48.583 } 00:19:48.583 EOF 00:19:48.583 
)") 00:19:48.583 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:48.843 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:48.843 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:48.843 { 00:19:48.843 "params": { 00:19:48.843 "name": "Nvme$subsystem", 00:19:48.843 "trtype": "$TEST_TRANSPORT", 00:19:48.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.843 "adrfam": "ipv4", 00:19:48.843 "trsvcid": "$NVMF_PORT", 00:19:48.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.843 "hdgst": ${hdgst:-false}, 00:19:48.843 "ddgst": ${ddgst:-false} 00:19:48.843 }, 00:19:48.843 "method": "bdev_nvme_attach_controller" 00:19:48.843 } 00:19:48.843 EOF 00:19:48.843 )") 00:19:48.843 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:48.843 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:48.843 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:48.843 { 00:19:48.843 "params": { 00:19:48.843 "name": "Nvme$subsystem", 00:19:48.843 "trtype": "$TEST_TRANSPORT", 00:19:48.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.843 "adrfam": "ipv4", 00:19:48.843 "trsvcid": "$NVMF_PORT", 00:19:48.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.843 "hdgst": ${hdgst:-false}, 00:19:48.843 "ddgst": ${ddgst:-false} 00:19:48.843 }, 00:19:48.843 "method": "bdev_nvme_attach_controller" 00:19:48.843 } 00:19:48.843 EOF 00:19:48.843 )") 00:19:48.843 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:48.843 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:48.843 22:43:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:48.843 { 00:19:48.843 "params": { 00:19:48.843 "name": "Nvme$subsystem", 00:19:48.843 "trtype": "$TEST_TRANSPORT", 00:19:48.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.843 "adrfam": "ipv4", 00:19:48.843 "trsvcid": "$NVMF_PORT", 00:19:48.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.843 "hdgst": ${hdgst:-false}, 00:19:48.843 "ddgst": ${ddgst:-false} 00:19:48.843 }, 00:19:48.843 "method": "bdev_nvme_attach_controller" 00:19:48.843 } 00:19:48.843 EOF 00:19:48.843 )") 00:19:48.843 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:48.843 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:48.843 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:48.843 { 00:19:48.843 "params": { 00:19:48.843 "name": "Nvme$subsystem", 00:19:48.843 "trtype": "$TEST_TRANSPORT", 00:19:48.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.843 "adrfam": "ipv4", 00:19:48.843 "trsvcid": "$NVMF_PORT", 00:19:48.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.843 "hdgst": ${hdgst:-false}, 00:19:48.843 "ddgst": ${ddgst:-false} 00:19:48.843 }, 00:19:48.843 "method": "bdev_nvme_attach_controller" 00:19:48.843 } 00:19:48.843 EOF 00:19:48.843 )") 00:19:48.843 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:48.843 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:48.843 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:48.843 { 00:19:48.843 "params": { 00:19:48.843 "name": "Nvme$subsystem", 00:19:48.843 "trtype": "$TEST_TRANSPORT", 00:19:48.843 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:19:48.843 "adrfam": "ipv4", 00:19:48.843 "trsvcid": "$NVMF_PORT", 00:19:48.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.843 "hdgst": ${hdgst:-false}, 00:19:48.843 "ddgst": ${ddgst:-false} 00:19:48.843 }, 00:19:48.843 "method": "bdev_nvme_attach_controller" 00:19:48.843 } 00:19:48.843 EOF 00:19:48.843 )") 00:19:48.843 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:48.843 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:48.843 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:48.843 { 00:19:48.843 "params": { 00:19:48.843 "name": "Nvme$subsystem", 00:19:48.843 "trtype": "$TEST_TRANSPORT", 00:19:48.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.843 "adrfam": "ipv4", 00:19:48.843 "trsvcid": "$NVMF_PORT", 00:19:48.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.843 "hdgst": ${hdgst:-false}, 00:19:48.843 "ddgst": ${ddgst:-false} 00:19:48.843 }, 00:19:48.843 "method": "bdev_nvme_attach_controller" 00:19:48.843 } 00:19:48.843 EOF 00:19:48.843 )") 00:19:48.843 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:48.843 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:48.844 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:48.844 { 00:19:48.844 "params": { 00:19:48.844 "name": "Nvme$subsystem", 00:19:48.844 "trtype": "$TEST_TRANSPORT", 00:19:48.844 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.844 "adrfam": "ipv4", 00:19:48.844 "trsvcid": "$NVMF_PORT", 00:19:48.844 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.844 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.844 
"hdgst": ${hdgst:-false}, 00:19:48.844 "ddgst": ${ddgst:-false} 00:19:48.844 }, 00:19:48.844 "method": "bdev_nvme_attach_controller" 00:19:48.844 } 00:19:48.844 EOF 00:19:48.844 )") 00:19:48.844 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:48.844 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:48.844 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:48.844 { 00:19:48.844 "params": { 00:19:48.844 "name": "Nvme$subsystem", 00:19:48.844 "trtype": "$TEST_TRANSPORT", 00:19:48.844 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.844 "adrfam": "ipv4", 00:19:48.844 "trsvcid": "$NVMF_PORT", 00:19:48.844 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.844 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.844 "hdgst": ${hdgst:-false}, 00:19:48.844 "ddgst": ${ddgst:-false} 00:19:48.844 }, 00:19:48.844 "method": "bdev_nvme_attach_controller" 00:19:48.844 } 00:19:48.844 EOF 00:19:48.844 )") 00:19:48.844 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:48.844 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:19:48.844 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:19:48.844 22:43:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:48.844 "params": { 00:19:48.844 "name": "Nvme1", 00:19:48.844 "trtype": "tcp", 00:19:48.844 "traddr": "10.0.0.2", 00:19:48.844 "adrfam": "ipv4", 00:19:48.844 "trsvcid": "4420", 00:19:48.844 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.844 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:48.844 "hdgst": false, 00:19:48.844 "ddgst": false 00:19:48.844 }, 00:19:48.844 "method": "bdev_nvme_attach_controller" 00:19:48.844 },{ 00:19:48.844 "params": { 00:19:48.844 "name": "Nvme2", 00:19:48.844 "trtype": "tcp", 00:19:48.844 "traddr": "10.0.0.2", 00:19:48.844 "adrfam": "ipv4", 00:19:48.844 "trsvcid": "4420", 00:19:48.844 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:48.844 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:48.844 "hdgst": false, 00:19:48.844 "ddgst": false 00:19:48.844 }, 00:19:48.844 "method": "bdev_nvme_attach_controller" 00:19:48.844 },{ 00:19:48.844 "params": { 00:19:48.844 "name": "Nvme3", 00:19:48.844 "trtype": "tcp", 00:19:48.844 "traddr": "10.0.0.2", 00:19:48.844 "adrfam": "ipv4", 00:19:48.844 "trsvcid": "4420", 00:19:48.844 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:48.844 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:48.844 "hdgst": false, 00:19:48.844 "ddgst": false 00:19:48.844 }, 00:19:48.844 "method": "bdev_nvme_attach_controller" 00:19:48.844 },{ 00:19:48.844 "params": { 00:19:48.844 "name": "Nvme4", 00:19:48.844 "trtype": "tcp", 00:19:48.844 "traddr": "10.0.0.2", 00:19:48.844 "adrfam": "ipv4", 00:19:48.844 "trsvcid": "4420", 00:19:48.844 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:48.844 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:48.844 "hdgst": false, 00:19:48.844 "ddgst": false 00:19:48.844 }, 00:19:48.844 "method": "bdev_nvme_attach_controller" 00:19:48.844 },{ 00:19:48.844 "params": { 00:19:48.844 "name": "Nvme5", 00:19:48.844 
"trtype": "tcp", 00:19:48.844 "traddr": "10.0.0.2", 00:19:48.844 "adrfam": "ipv4", 00:19:48.844 "trsvcid": "4420", 00:19:48.844 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:48.844 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:48.844 "hdgst": false, 00:19:48.844 "ddgst": false 00:19:48.844 }, 00:19:48.844 "method": "bdev_nvme_attach_controller" 00:19:48.844 },{ 00:19:48.844 "params": { 00:19:48.844 "name": "Nvme6", 00:19:48.844 "trtype": "tcp", 00:19:48.844 "traddr": "10.0.0.2", 00:19:48.844 "adrfam": "ipv4", 00:19:48.844 "trsvcid": "4420", 00:19:48.844 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:48.844 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:48.844 "hdgst": false, 00:19:48.844 "ddgst": false 00:19:48.844 }, 00:19:48.844 "method": "bdev_nvme_attach_controller" 00:19:48.844 },{ 00:19:48.844 "params": { 00:19:48.844 "name": "Nvme7", 00:19:48.844 "trtype": "tcp", 00:19:48.844 "traddr": "10.0.0.2", 00:19:48.844 "adrfam": "ipv4", 00:19:48.844 "trsvcid": "4420", 00:19:48.844 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:48.844 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:48.844 "hdgst": false, 00:19:48.844 "ddgst": false 00:19:48.844 }, 00:19:48.844 "method": "bdev_nvme_attach_controller" 00:19:48.844 },{ 00:19:48.844 "params": { 00:19:48.844 "name": "Nvme8", 00:19:48.844 "trtype": "tcp", 00:19:48.844 "traddr": "10.0.0.2", 00:19:48.844 "adrfam": "ipv4", 00:19:48.844 "trsvcid": "4420", 00:19:48.844 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:48.844 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:48.844 "hdgst": false, 00:19:48.844 "ddgst": false 00:19:48.844 }, 00:19:48.844 "method": "bdev_nvme_attach_controller" 00:19:48.844 },{ 00:19:48.844 "params": { 00:19:48.844 "name": "Nvme9", 00:19:48.844 "trtype": "tcp", 00:19:48.844 "traddr": "10.0.0.2", 00:19:48.844 "adrfam": "ipv4", 00:19:48.844 "trsvcid": "4420", 00:19:48.844 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:48.844 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:48.844 "hdgst": false, 00:19:48.844 "ddgst": 
false 00:19:48.844 }, 00:19:48.844 "method": "bdev_nvme_attach_controller" 00:19:48.844 },{ 00:19:48.844 "params": { 00:19:48.844 "name": "Nvme10", 00:19:48.844 "trtype": "tcp", 00:19:48.844 "traddr": "10.0.0.2", 00:19:48.844 "adrfam": "ipv4", 00:19:48.844 "trsvcid": "4420", 00:19:48.844 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:48.844 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:48.844 "hdgst": false, 00:19:48.844 "ddgst": false 00:19:48.844 }, 00:19:48.844 "method": "bdev_nvme_attach_controller" 00:19:48.844 }' 00:19:48.844 [2024-07-15 22:43:32.120676] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:19:48.844 [2024-07-15 22:43:32.120753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1301733 ] 00:19:48.844 [2024-07-15 22:43:32.185193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.844 [2024-07-15 22:43:32.294582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.782 Running I/O for 10 seconds... 
00:19:50.782 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:50.782 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # return 0 00:19:50.782 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:50.782 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:50.782 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:50.782 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:50.782 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:50.782 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:50.782 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:50.782 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:19:50.782 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:19:50.782 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:50.782 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:50.782 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:50.782 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:50.782 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:50.782 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set 
+x 00:19:50.782 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:50.782 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:19:50.782 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:19:50.782 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:51.040 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:51.040 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:51.040 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:51.040 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:51.040 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:51.040 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:51.040 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:51.040 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:19:51.040 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:19:51.040 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:19:51.040 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:19:51.040 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:19:51.040 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1301733 00:19:51.040 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@942 -- # '[' -z 1301733 ']' 00:19:51.040 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # kill -0 1301733 00:19:51.040 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@947 -- # uname 00:19:51.040 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:19:51.040 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1301733 00:19:51.040 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:19:51.040 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:19:51.040 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1301733' 00:19:51.040 killing process with pid 1301733 00:19:51.040 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@961 -- # kill 1301733 00:19:51.041 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # wait 1301733 00:19:51.299 Received shutdown signal, test time was about 0.746326 seconds 00:19:51.299 00:19:51.299 Latency(us) 00:19:51.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.299 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:51.299 Verification LBA range: start 0x0 length 0x400 00:19:51.299 Nvme1n1 : 0.74 260.71 16.29 0.00 0.00 241793.20 24175.50 295154.73 00:19:51.299 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:51.299 Verification LBA range: start 0x0 length 0x400 00:19:51.299 Nvme2n1 : 0.70 182.41 11.40 0.00 0.00 336200.82 21262.79 250104.79 00:19:51.299 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:51.299 Verification LBA range: start 0x0 length 0x400 
00:19:51.299 Nvme3n1 : 0.75 257.55 16.10 0.00 0.00 232373.16 32816.55 234570.33 00:19:51.299 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:51.299 Verification LBA range: start 0x0 length 0x400 00:19:51.299 Nvme4n1 : 0.74 258.90 16.18 0.00 0.00 225008.96 20388.98 253211.69 00:19:51.299 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:51.299 Verification LBA range: start 0x0 length 0x400 00:19:51.299 Nvme5n1 : 0.73 262.58 16.41 0.00 0.00 215537.59 21845.33 236123.78 00:19:51.299 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:51.299 Verification LBA range: start 0x0 length 0x400 00:19:51.299 Nvme6n1 : 0.72 264.99 16.56 0.00 0.00 206853.06 35923.44 212822.09 00:19:51.299 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:51.299 Verification LBA range: start 0x0 length 0x400 00:19:51.299 Nvme7n1 : 0.73 175.79 10.99 0.00 0.00 304004.36 26214.40 309135.74 00:19:51.299 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:51.299 Verification LBA range: start 0x0 length 0x400 00:19:51.299 Nvme8n1 : 0.72 266.40 16.65 0.00 0.00 194122.33 19126.80 229910.00 00:19:51.299 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:51.299 Verification LBA range: start 0x0 length 0x400 00:19:51.299 Nvme9n1 : 0.69 184.70 11.54 0.00 0.00 266541.13 21359.88 253211.69 00:19:51.299 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:51.299 Verification LBA range: start 0x0 length 0x400 00:19:51.299 Nvme10n1 : 0.71 180.60 11.29 0.00 0.00 267647.05 21068.61 264085.81 00:19:51.299 =================================================================================================================== 00:19:51.299 Total : 2294.63 143.41 0.00 0.00 242148.14 19126.80 309135.74 00:19:51.557 22:43:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:19:52.490 22:43:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1301542 00:19:52.490 22:43:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:19:52.490 22:43:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:52.490 22:43:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:52.490 22:43:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:52.490 22:43:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:52.490 22:43:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:52.490 22:43:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:19:52.490 22:43:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:52.490 22:43:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:19:52.490 22:43:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:52.490 22:43:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:52.490 rmmod nvme_tcp 00:19:52.490 rmmod nvme_fabrics 00:19:52.490 rmmod nvme_keyring 00:19:52.490 22:43:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:52.490 22:43:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:19:52.490 22:43:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:19:52.490 22:43:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1301542 ']' 00:19:52.490 22:43:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@490 -- # killprocess 1301542 00:19:52.490 22:43:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@942 -- # '[' -z 1301542 ']' 00:19:52.490 22:43:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # kill -0 1301542 00:19:52.490 22:43:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@947 -- # uname 00:19:52.490 22:43:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:19:52.490 22:43:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1301542 00:19:52.490 22:43:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:19:52.490 22:43:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:19:52.490 22:43:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1301542' 00:19:52.490 killing process with pid 1301542 00:19:52.490 22:43:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@961 -- # kill 1301542 00:19:52.490 22:43:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # wait 1301542 00:19:53.058 22:43:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:53.058 22:43:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:53.058 22:43:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:53.058 22:43:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:53.058 22:43:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:53.058 22:43:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:19:53.058 22:43:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:53.058 22:43:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:55.599 00:19:55.599 real 0m8.202s 00:19:55.599 user 0m25.097s 00:19:55.599 sys 0m1.461s 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:55.599 ************************************ 00:19:55.599 END TEST nvmf_shutdown_tc2 00:19:55.599 ************************************ 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1136 -- # return 0 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # xtrace_disable 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:55.599 ************************************ 00:19:55.599 START TEST nvmf_shutdown_tc3 00:19:55.599 ************************************ 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1117 -- # nvmf_shutdown_tc3 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:55.599 
22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:55.599 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:55.599 22:43:38 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:55.599 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.599 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:19:55.599 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:55.600 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 
up 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:55.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:55.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:19:55.600 00:19:55.600 --- 10.0.0.2 ping statistics --- 00:19:55.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.600 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:55.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:55.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:19:55.600 00:19:55.600 --- 10.0.0.1 ping statistics --- 00:19:55.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.600 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp 
-o' 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1302635 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1302635 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@823 -- # '[' -z 1302635 ']' 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:55.600 22:43:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:55.600 [2024-07-15 22:43:38.815830] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:19:55.600 [2024-07-15 22:43:38.815932] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.600 [2024-07-15 22:43:38.880128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:55.600 [2024-07-15 22:43:38.988132] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:55.600 [2024-07-15 22:43:38.988181] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:55.600 [2024-07-15 22:43:38.988194] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:55.600 [2024-07-15 22:43:38.988213] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:55.600 [2024-07-15 22:43:38.988223] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:55.600 [2024-07-15 22:43:38.988274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.600 [2024-07-15 22:43:38.988336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:55.600 [2024-07-15 22:43:38.988401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:55.600 [2024-07-15 22:43:38.988404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # return 0 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:56.534 [2024-07-15 22:43:39.768973] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:56.534 
22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:56.534 22:43:39 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:56.534 22:43:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:56.534 Malloc1 00:19:56.534 [2024-07-15 22:43:39.858224] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:56.534 Malloc2 00:19:56.534 Malloc3 00:19:56.534 Malloc4 00:19:56.534 Malloc5 00:19:56.792 Malloc6 00:19:56.792 Malloc7 00:19:56.792 Malloc8 00:19:56.792 Malloc9 00:19:57.049 Malloc10 00:19:57.049 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:57.049 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:57.049 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:57.049 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:57.049 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1302823 00:19:57.049 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 
1302823 /var/tmp/bdevperf.sock 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@823 -- # '[' -z 1302823 ']' 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:57.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.050 { 00:19:57.050 "params": { 00:19:57.050 "name": "Nvme$subsystem", 00:19:57.050 "trtype": "$TEST_TRANSPORT", 00:19:57.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.050 "adrfam": "ipv4", 00:19:57.050 "trsvcid": "$NVMF_PORT", 00:19:57.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.050 "hdgst": ${hdgst:-false}, 00:19:57.050 "ddgst": ${ddgst:-false} 00:19:57.050 }, 00:19:57.050 "method": "bdev_nvme_attach_controller" 00:19:57.050 } 00:19:57.050 EOF 00:19:57.050 )") 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.050 { 00:19:57.050 "params": { 00:19:57.050 "name": "Nvme$subsystem", 00:19:57.050 "trtype": "$TEST_TRANSPORT", 00:19:57.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.050 "adrfam": "ipv4", 00:19:57.050 "trsvcid": "$NVMF_PORT", 00:19:57.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.050 "hdgst": ${hdgst:-false}, 00:19:57.050 "ddgst": ${ddgst:-false} 00:19:57.050 
}, 00:19:57.050 "method": "bdev_nvme_attach_controller" 00:19:57.050 } 00:19:57.050 EOF 00:19:57.050 )") 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.050 { 00:19:57.050 "params": { 00:19:57.050 "name": "Nvme$subsystem", 00:19:57.050 "trtype": "$TEST_TRANSPORT", 00:19:57.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.050 "adrfam": "ipv4", 00:19:57.050 "trsvcid": "$NVMF_PORT", 00:19:57.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.050 "hdgst": ${hdgst:-false}, 00:19:57.050 "ddgst": ${ddgst:-false} 00:19:57.050 }, 00:19:57.050 "method": "bdev_nvme_attach_controller" 00:19:57.050 } 00:19:57.050 EOF 00:19:57.050 )") 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.050 { 00:19:57.050 "params": { 00:19:57.050 "name": "Nvme$subsystem", 00:19:57.050 "trtype": "$TEST_TRANSPORT", 00:19:57.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.050 "adrfam": "ipv4", 00:19:57.050 "trsvcid": "$NVMF_PORT", 00:19:57.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.050 "hdgst": ${hdgst:-false}, 00:19:57.050 "ddgst": ${ddgst:-false} 00:19:57.050 }, 00:19:57.050 "method": "bdev_nvme_attach_controller" 00:19:57.050 } 00:19:57.050 EOF 00:19:57.050 )") 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:57.050 22:43:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.050 { 00:19:57.050 "params": { 00:19:57.050 "name": "Nvme$subsystem", 00:19:57.050 "trtype": "$TEST_TRANSPORT", 00:19:57.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.050 "adrfam": "ipv4", 00:19:57.050 "trsvcid": "$NVMF_PORT", 00:19:57.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.050 "hdgst": ${hdgst:-false}, 00:19:57.050 "ddgst": ${ddgst:-false} 00:19:57.050 }, 00:19:57.050 "method": "bdev_nvme_attach_controller" 00:19:57.050 } 00:19:57.050 EOF 00:19:57.050 )") 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.050 { 00:19:57.050 "params": { 00:19:57.050 "name": "Nvme$subsystem", 00:19:57.050 "trtype": "$TEST_TRANSPORT", 00:19:57.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.050 "adrfam": "ipv4", 00:19:57.050 "trsvcid": "$NVMF_PORT", 00:19:57.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.050 "hdgst": ${hdgst:-false}, 00:19:57.050 "ddgst": ${ddgst:-false} 00:19:57.050 }, 00:19:57.050 "method": "bdev_nvme_attach_controller" 00:19:57.050 } 00:19:57.050 EOF 00:19:57.050 )") 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.050 { 00:19:57.050 
"params": { 00:19:57.050 "name": "Nvme$subsystem", 00:19:57.050 "trtype": "$TEST_TRANSPORT", 00:19:57.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.050 "adrfam": "ipv4", 00:19:57.050 "trsvcid": "$NVMF_PORT", 00:19:57.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.050 "hdgst": ${hdgst:-false}, 00:19:57.050 "ddgst": ${ddgst:-false} 00:19:57.050 }, 00:19:57.050 "method": "bdev_nvme_attach_controller" 00:19:57.050 } 00:19:57.050 EOF 00:19:57.050 )") 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.050 { 00:19:57.050 "params": { 00:19:57.050 "name": "Nvme$subsystem", 00:19:57.050 "trtype": "$TEST_TRANSPORT", 00:19:57.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.050 "adrfam": "ipv4", 00:19:57.050 "trsvcid": "$NVMF_PORT", 00:19:57.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.050 "hdgst": ${hdgst:-false}, 00:19:57.050 "ddgst": ${ddgst:-false} 00:19:57.050 }, 00:19:57.050 "method": "bdev_nvme_attach_controller" 00:19:57.050 } 00:19:57.050 EOF 00:19:57.050 )") 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.050 { 00:19:57.050 "params": { 00:19:57.050 "name": "Nvme$subsystem", 00:19:57.050 "trtype": "$TEST_TRANSPORT", 00:19:57.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.050 "adrfam": "ipv4", 00:19:57.050 "trsvcid": "$NVMF_PORT", 00:19:57.050 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.050 "hdgst": ${hdgst:-false}, 00:19:57.050 "ddgst": ${ddgst:-false} 00:19:57.050 }, 00:19:57.050 "method": "bdev_nvme_attach_controller" 00:19:57.050 } 00:19:57.050 EOF 00:19:57.050 )") 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.050 { 00:19:57.050 "params": { 00:19:57.050 "name": "Nvme$subsystem", 00:19:57.050 "trtype": "$TEST_TRANSPORT", 00:19:57.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.050 "adrfam": "ipv4", 00:19:57.050 "trsvcid": "$NVMF_PORT", 00:19:57.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.050 "hdgst": ${hdgst:-false}, 00:19:57.050 "ddgst": ${ddgst:-false} 00:19:57.050 }, 00:19:57.050 "method": "bdev_nvme_attach_controller" 00:19:57.050 } 00:19:57.050 EOF 00:19:57.050 )") 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
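The trace above shows nvmf/common.sh building one `bdev_nvme_attach_controller` JSON fragment per subsystem by appending heredoc output to a bash array, then joining the fragments before piping them through `jq .`. A minimal self-contained sketch of that pattern (the environment values here are stand-ins for `TEST_TRANSPORT`, `NVMF_FIRST_TARGET_IP`, and `NVMF_PORT`, and plain `<<EOF` replaces the tab-indented `<<-EOF` of the original):

```shell
#!/usr/bin/env bash
# Stand-in values; in the CI run these come from the nvmf test environment.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2; do
    # Each iteration renders one JSON fragment, expanding the loop
    # variable into the controller name and the cnode/host NQNs.
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done

# Join the fragments with commas before handing them off, as the
# IFS=, / printf '%s\n' "${config[*]}" step in the trace does.
IFS=,
printf '%s\n' "${config[*]}"
```

The comma-joined result is what the later `printf '%s\n' '{ ... },{ ... }'` record in the log shows after variable expansion, one object per attached controller.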
00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:19:57.050 22:43:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:57.050 "params": { 00:19:57.050 "name": "Nvme1", 00:19:57.050 "trtype": "tcp", 00:19:57.050 "traddr": "10.0.0.2", 00:19:57.050 "adrfam": "ipv4", 00:19:57.050 "trsvcid": "4420", 00:19:57.050 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.050 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:57.050 "hdgst": false, 00:19:57.050 "ddgst": false 00:19:57.050 }, 00:19:57.050 "method": "bdev_nvme_attach_controller" 00:19:57.050 },{ 00:19:57.050 "params": { 00:19:57.050 "name": "Nvme2", 00:19:57.050 "trtype": "tcp", 00:19:57.050 "traddr": "10.0.0.2", 00:19:57.050 "adrfam": "ipv4", 00:19:57.050 "trsvcid": "4420", 00:19:57.050 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:57.050 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:57.050 "hdgst": false, 00:19:57.050 "ddgst": false 00:19:57.050 }, 00:19:57.050 "method": "bdev_nvme_attach_controller" 00:19:57.050 },{ 00:19:57.050 "params": { 00:19:57.050 "name": "Nvme3", 00:19:57.050 "trtype": "tcp", 00:19:57.050 "traddr": "10.0.0.2", 00:19:57.050 "adrfam": "ipv4", 00:19:57.050 "trsvcid": "4420", 00:19:57.050 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:57.050 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:57.050 "hdgst": false, 00:19:57.050 "ddgst": false 00:19:57.050 }, 00:19:57.050 "method": "bdev_nvme_attach_controller" 00:19:57.050 },{ 00:19:57.050 "params": { 00:19:57.050 "name": "Nvme4", 00:19:57.050 "trtype": "tcp", 00:19:57.050 "traddr": "10.0.0.2", 00:19:57.050 "adrfam": "ipv4", 00:19:57.050 "trsvcid": "4420", 00:19:57.050 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:57.050 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:57.050 "hdgst": false, 00:19:57.050 "ddgst": false 00:19:57.050 }, 00:19:57.050 "method": "bdev_nvme_attach_controller" 00:19:57.050 },{ 00:19:57.051 "params": { 00:19:57.051 "name": "Nvme5", 00:19:57.051 
"trtype": "tcp", 00:19:57.051 "traddr": "10.0.0.2", 00:19:57.051 "adrfam": "ipv4", 00:19:57.051 "trsvcid": "4420", 00:19:57.051 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:57.051 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:57.051 "hdgst": false, 00:19:57.051 "ddgst": false 00:19:57.051 }, 00:19:57.051 "method": "bdev_nvme_attach_controller" 00:19:57.051 },{ 00:19:57.051 "params": { 00:19:57.051 "name": "Nvme6", 00:19:57.051 "trtype": "tcp", 00:19:57.051 "traddr": "10.0.0.2", 00:19:57.051 "adrfam": "ipv4", 00:19:57.051 "trsvcid": "4420", 00:19:57.051 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:57.051 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:57.051 "hdgst": false, 00:19:57.051 "ddgst": false 00:19:57.051 }, 00:19:57.051 "method": "bdev_nvme_attach_controller" 00:19:57.051 },{ 00:19:57.051 "params": { 00:19:57.051 "name": "Nvme7", 00:19:57.051 "trtype": "tcp", 00:19:57.051 "traddr": "10.0.0.2", 00:19:57.051 "adrfam": "ipv4", 00:19:57.051 "trsvcid": "4420", 00:19:57.051 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:57.051 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:57.051 "hdgst": false, 00:19:57.051 "ddgst": false 00:19:57.051 }, 00:19:57.051 "method": "bdev_nvme_attach_controller" 00:19:57.051 },{ 00:19:57.051 "params": { 00:19:57.051 "name": "Nvme8", 00:19:57.051 "trtype": "tcp", 00:19:57.051 "traddr": "10.0.0.2", 00:19:57.051 "adrfam": "ipv4", 00:19:57.051 "trsvcid": "4420", 00:19:57.051 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:57.051 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:57.051 "hdgst": false, 00:19:57.051 "ddgst": false 00:19:57.051 }, 00:19:57.051 "method": "bdev_nvme_attach_controller" 00:19:57.051 },{ 00:19:57.051 "params": { 00:19:57.051 "name": "Nvme9", 00:19:57.051 "trtype": "tcp", 00:19:57.051 "traddr": "10.0.0.2", 00:19:57.051 "adrfam": "ipv4", 00:19:57.051 "trsvcid": "4420", 00:19:57.051 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:57.051 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:57.051 "hdgst": false, 00:19:57.051 "ddgst": 
false 00:19:57.051 }, 00:19:57.051 "method": "bdev_nvme_attach_controller" 00:19:57.051 },{ 00:19:57.051 "params": { 00:19:57.051 "name": "Nvme10", 00:19:57.051 "trtype": "tcp", 00:19:57.051 "traddr": "10.0.0.2", 00:19:57.051 "adrfam": "ipv4", 00:19:57.051 "trsvcid": "4420", 00:19:57.051 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:57.051 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:57.051 "hdgst": false, 00:19:57.051 "ddgst": false 00:19:57.051 }, 00:19:57.051 "method": "bdev_nvme_attach_controller" 00:19:57.051 }' 00:19:57.051 [2024-07-15 22:43:40.390750] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:19:57.051 [2024-07-15 22:43:40.390829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1302823 ] 00:19:57.051 [2024-07-15 22:43:40.457562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.307 [2024-07-15 22:43:40.569505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.200 Running I/O for 10 seconds... 
00:19:59.200 22:43:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:59.200 22:43:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # return 0 00:19:59.200 22:43:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:59.200 22:43:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:59.200 22:43:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:59.200 22:43:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:59.200 22:43:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:59.201 22:43:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:59.201 22:43:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:59.201 22:43:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:59.201 22:43:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:19:59.201 22:43:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:19:59.201 22:43:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:59.201 22:43:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:59.201 22:43:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:59.201 22:43:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:59.201 
22:43:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:59.201 22:43:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:59.201 22:43:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:59.201 22:43:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:19:59.201 22:43:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:19:59.201 22:43:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:59.458 22:43:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:59.458 22:43:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:59.458 22:43:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:59.458 22:43:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:59.458 22:43:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:59.458 22:43:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:59.458 22:43:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:59.458 22:43:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:19:59.458 22:43:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:19:59.458 22:43:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:59.716 22:43:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:59.716 22:43:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i 
!= 0 )) 00:19:59.716 22:43:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:59.716 22:43:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:59.716 22:43:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:59.716 22:43:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:59.990 22:43:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:59.990 22:43:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:19:59.990 22:43:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:19:59.990 22:43:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:19:59.990 22:43:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:19:59.990 22:43:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:19:59.990 22:43:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1302635 00:19:59.990 22:43:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@942 -- # '[' -z 1302635 ']' 00:19:59.990 22:43:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # kill -0 1302635 00:19:59.990 22:43:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@947 -- # uname 00:19:59.990 22:43:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:19:59.990 22:43:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1302635 00:19:59.990 22:43:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:19:59.990 
22:43:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:19:59.990 22:43:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1302635' 00:19:59.990 killing process with pid 1302635 00:19:59.990 22:43:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@961 -- # kill 1302635 00:19:59.990 22:43:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # wait 1302635 00:19:59.990 [2024-07-15 22:43:43.267406] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.267548] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.267565] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.267579] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.267591] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.267604] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.267617] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.267641] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.267654] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be 
set 00:19:59.990 [2024-07-15 22:43:43.267667] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.267679] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.267691] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.267703] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.267716] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.267728] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.267741] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.267754] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.267766] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.267778] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.267791] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.267804] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 
22:43:43.267817] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.267830] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.267842] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.267854] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.267867] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.267889] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.267904] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.267917] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.267937] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.267950] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.267962] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.267974] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.267986] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.268003] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.268016] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.268029] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.268041] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.268053] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.268066] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.268078] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.268090] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.268103] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.268115] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.268127] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.268139] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.268151] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.268164] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.268176] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.268191] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.268203] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.268215] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.268227] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.268240] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.268252] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.268263] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.268275] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.268288] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.268300] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.268312] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.268324] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.268336] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.268351] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934a80 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.269662] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1937480 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.269695] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1937480 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.269710] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1937480 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.271152] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934f20 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.271186] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934f20 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.271200] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934f20 is same with the state(5) to be set 00:19:59.990 [2024-07-15 22:43:43.271213] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934f20 is same with the state(5) to be set 00:19:59.990 (last message repeated 59 times between 22:43:43.271225 and 22:43:43.271964) 00:19:59.991 [2024-07-15 22:43:43.272248] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.272293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.272324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.272339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.272356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.272370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.272385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.272398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.272413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.272427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.272443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.272456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.272472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.272491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.272507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.272521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.272536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.272550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.272566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.272579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.272594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.272608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.272623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.272636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.272652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.272666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.272682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.272695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.272710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.272724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.272739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.272753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.272768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.272782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 
22:43:43.272797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.272811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.272826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.272840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.272859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.272874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.272898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.272913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.272938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.272952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.272967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.272980] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.272996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.273009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.273024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.273037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.273052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.273066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.273081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.273094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.273110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.273123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.273138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.273152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.273167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.273185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.273200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.273213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.273229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.273253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.273269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.273284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.273299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.273312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.273327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.273341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.273356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.273369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.273384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.273398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.273413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.273427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.273442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 22:43:43.273456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.991 [2024-07-15 22:43:43.273471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.991 [2024-07-15 
22:43:43.273484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:59.991 [2024-07-15 22:43:43.273499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.991 [2024-07-15 22:43:43.273513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:59.991 [2024-07-15 22:43:43.273528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.991 [2024-07-15 22:43:43.273542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:59.991 [2024-07-15 22:43:43.273557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.991 [2024-07-15 22:43:43.273570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:59.991 [2024-07-15 22:43:43.273585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.991 [2024-07-15 22:43:43.273598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:59.991 [2024-07-15 22:43:43.273600] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19353c0 is same with the state(5) to be set
(last message repeated 43 times through 22:43:43.274248, interleaved with the nvme_qpair prints)
00:19:59.991 [2024-07-15 22:43:43.273617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.991 [2024-07-15 22:43:43.273633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:59.991 [2024-07-15 22:43:43.273648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.991 [2024-07-15 22:43:43.273662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:59.991 [2024-07-15 22:43:43.273678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.991 [2024-07-15 22:43:43.273693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:59.991 [2024-07-15 22:43:43.273710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.991 [2024-07-15 22:43:43.273725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:59.991 [2024-07-15 22:43:43.273740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.992 [2024-07-15 22:43:43.273754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:59.992 [2024-07-15 22:43:43.273771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.992 [2024-07-15 22:43:43.273786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:59.992 [2024-07-15 22:43:43.273803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.992 [2024-07-15 22:43:43.273821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:59.992 [2024-07-15 22:43:43.273838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.992 [2024-07-15 22:43:43.273852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:59.992 [2024-07-15 22:43:43.273868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.992 [2024-07-15 22:43:43.273889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:59.992 [2024-07-15 22:43:43.273906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.992 [2024-07-15 22:43:43.273920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:59.992 [2024-07-15 22:43:43.273938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.992 [2024-07-15 22:43:43.273952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:59.992 [2024-07-15 22:43:43.273968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.992 [2024-07-15 22:43:43.273982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:59.992 [2024-07-15 22:43:43.273998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.992 [2024-07-15 22:43:43.274011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:59.992 [2024-07-15 22:43:43.274027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.992 [2024-07-15 22:43:43.274041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:59.992 [2024-07-15 22:43:43.274056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.992 [2024-07-15 22:43:43.274072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:59.992 [2024-07-15 22:43:43.274090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.992 [2024-07-15 22:43:43.274104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:59.992 [2024-07-15 22:43:43.274120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.992 [2024-07-15 22:43:43.274133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:59.992 [2024-07-15 22:43:43.274149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.992 [2024-07-15 22:43:43.274163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:59.992 [2024-07-15 22:43:43.274189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.992 [2024-07-15 22:43:43.274204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:59.992 [2024-07-15 22:43:43.274220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.992 [2024-07-15 22:43:43.274234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:59.992 [2024-07-15 22:43:43.274261] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x19353c0 is same with the state(5) to be set 00:19:59.992 [2024-07-15 22:43:43.274273] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19353c0 is same with the state(5) to be set 00:19:59.992 [2024-07-15 22:43:43.274280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:19:59.992 [2024-07-15 22:43:43.274285] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19353c0 is same with the state(5) to be set 00:19:59.992 [2024-07-15 22:43:43.274300] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19353c0 is same with the state(5) to be set 00:19:59.992 [2024-07-15 22:43:43.274312] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19353c0 is same with the state(5) to be set 00:19:59.992 [2024-07-15 22:43:43.274325] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19353c0 is same with the state(5) to be set 00:19:59.992 [2024-07-15 22:43:43.274336] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19353c0 is same with the state(5) to be set 00:19:59.992 [2024-07-15 22:43:43.274349] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19353c0 is same with the state(5) to be set 00:19:59.992 [2024-07-15 22:43:43.274361] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19353c0 is same with the state(5) to be set 00:19:59.992 [2024-07-15 22:43:43.274373] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19353c0 is same with the state(5) to be set 00:19:59.992 [2024-07-15 22:43:43.274385] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19353c0 is same with the state(5) to be set 00:19:59.992 [2024-07-15 22:43:43.274397] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x19353c0 is same with the state(5) to be set 00:19:59.992 [2024-07-15 22:43:43.274409] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19353c0 is same with the state(5) to be set 00:19:59.992 [2024-07-15 22:43:43.274421] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19353c0 is same with the state(5) to be set 00:19:59.992 [2024-07-15 22:43:43.274433] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19353c0 is same with the state(5) to be set 00:19:59.992 [2024-07-15 22:43:43.274446] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19353c0 is same with the state(5) to be set 00:19:59.992 [2024-07-15 22:43:43.274458] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19353c0 is same with the state(5) to be set 00:19:59.992 [2024-07-15 22:43:43.274470] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19353c0 is same with the state(5) to be set 00:19:59.992 [2024-07-15 22:43:43.274481] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19353c0 is same with the state(5) to be set 00:19:59.992 [2024-07-15 22:43:43.274874] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18cb590 was disconnected and freed. reset controller. 
00:19:59.992 [2024-07-15 22:43:43.275003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.992 [2024-07-15 22:43:43.275026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.992 [2024-07-15 22:43:43.275042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.992 [2024-07-15 22:43:43.275055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.992 [2024-07-15 22:43:43.275069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.992 [2024-07-15 22:43:43.275082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.992 [2024-07-15 22:43:43.275096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.992 [2024-07-15 22:43:43.275108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.992 [2024-07-15 22:43:43.275121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc42240 is same with the state(5) to be set 00:19:59.992 [2024-07-15 22:43:43.275170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.992 [2024-07-15 22:43:43.275190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.992 [2024-07-15 22:43:43.275205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.992 [2024-07-15 22:43:43.275218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.992 [2024-07-15 22:43:43.275232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.992 [2024-07-15 22:43:43.275245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.992 [2024-07-15 22:43:43.275258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.992 [2024-07-15 22:43:43.275271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.992 [2024-07-15 22:43:43.275284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa2450 is same with the state(5) to be set 00:19:59.992 [2024-07-15 22:43:43.275376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.992 [2024-07-15 22:43:43.275397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.992 [2024-07-15 22:43:43.275412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.992 [2024-07-15 22:43:43.275425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.992 [2024-07-15 22:43:43.275439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.993 
[2024-07-15 22:43:43.275452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.993 [2024-07-15 22:43:43.275465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.993 [2024-07-15 22:43:43.275482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.993 [2024-07-15 22:43:43.275496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa7440 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.275542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.993 [2024-07-15 22:43:43.275562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.993 [2024-07-15 22:43:43.275576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.993 [2024-07-15 22:43:43.275590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.993 [2024-07-15 22:43:43.275604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.993 [2024-07-15 22:43:43.275617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.993 [2024-07-15 22:43:43.275631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.993 [2024-07-15 22:43:43.275643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.993 [2024-07-15 22:43:43.275656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa76830 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.275744] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.275775] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.275791] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.275804] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.275816] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.275828] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.275841] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.275875] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.275900] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.275913] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.275936] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 
is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.275949] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.275961] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.275973] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.275985] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.275997] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276016] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276029] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276041] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276054] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276066] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276078] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276091] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be 
set 00:19:59.993 [2024-07-15 22:43:43.276103] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276115] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276128] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276140] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276152] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276164] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276187] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276200] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276212] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276226] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276247] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276260] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 
22:43:43.276272] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276285] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276297] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276310] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276323] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276335] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276348] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276360] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276377] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276390] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276402] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276415] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276427] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276440] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276452] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276465] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276477] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276490] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276502] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276515] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276527] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276540] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276552] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276565] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276577] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276589] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276602] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.276614] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935880 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277479] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277513] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277537] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277558] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277577] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277598] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277616] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277634] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277647] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277659] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277670] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277682] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277694] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277706] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277718] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277730] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277742] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277754] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277766] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277778] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277790] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277801] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277813] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277825] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277837] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277849] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277861] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277873] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277896] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277908] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277921] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277933] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277945] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277957] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277973] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277986] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.277998] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.278011] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.278023] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.278035] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.278047] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.278059] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.278071] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.278083] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set 00:19:59.993 [2024-07-15 22:43:43.278095] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935d20 is same with the state(5) to be set
(previous message repeated: for tqpair=0x1935d20 through [2024-07-15 22:43:43.278315], for tqpair=0x19361e0 from [2024-07-15 22:43:43.279282] through [2024-07-15 22:43:43.279829], and for tqpair=0x1936680 from [2024-07-15 22:43:43.281046] through [2024-07-15 22:43:43.281862])
00:19:59.994 [2024-07-15 22:43:43.282959] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:19:59.994 [2024-07-15 22:43:43.283007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa7440 (9): Bad file descriptor
00:19:59.994 [2024-07-15 22:43:43.283669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.994 [2024-07-15 22:43:43.283695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(WRITE / ABORTED - SQ DELETION entry pairs repeated for cid:1 through cid:56, lba advancing by 128 from 16512 to 23552)
00:19:59.995 [2024-07-15 22:43:43.285361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:59.995 [2024-07-15 22:43:43.285374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:19:59.995 [2024-07-15 22:43:43.285389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.995 [2024-07-15 22:43:43.285402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.995 [2024-07-15 22:43:43.285417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.995 [2024-07-15 22:43:43.285431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.995 [2024-07-15 22:43:43.285445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.995 [2024-07-15 22:43:43.285459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.995 [2024-07-15 22:43:43.285477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.995 [2024-07-15 22:43:43.285491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.995 [2024-07-15 22:43:43.285506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.995 [2024-07-15 22:43:43.285519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.995 [2024-07-15 22:43:43.285534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.995 [2024-07-15 
22:43:43.285547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.995 [2024-07-15 22:43:43.285577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:19:59.995 [2024-07-15 22:43:43.285651] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11fc760 was disconnected and freed. reset controller. 00:19:59.995 [2024-07-15 22:43:43.285970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc42240 (9): Bad file descriptor 00:19:59.996 [2024-07-15 22:43:43.286007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa2450 (9): Bad file descriptor 00:19:59.996 [2024-07-15 22:43:43.286061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.996 [2024-07-15 22:43:43.286082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.996 [2024-07-15 22:43:43.286097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.996 [2024-07-15 22:43:43.286110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.996 [2024-07-15 22:43:43.286124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.996 [2024-07-15 22:43:43.286136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.996 [2024-07-15 22:43:43.286149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:19:59.996 [2024-07-15 22:43:43.286162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.996 [2024-07-15 22:43:43.286174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa98c60 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.286221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.996 [2024-07-15 22:43:43.286241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.996 [2024-07-15 22:43:43.286256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.996 [2024-07-15 22:43:43.286269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.996 [2024-07-15 22:43:43.286282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.996 [2024-07-15 22:43:43.286295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.996 [2024-07-15 22:43:43.286308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.996 [2024-07-15 22:43:43.286325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.996 [2024-07-15 22:43:43.286338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99280 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.286383] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.996 [2024-07-15 22:43:43.286403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.996 [2024-07-15 22:43:43.286417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.996 [2024-07-15 22:43:43.286430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.996 [2024-07-15 22:43:43.286444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.996 [2024-07-15 22:43:43.286457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.996 [2024-07-15 22:43:43.286470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.996 [2024-07-15 22:43:43.286490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.996 [2024-07-15 22:43:43.286504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x578610 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.286542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.996 [2024-07-15 22:43:43.286561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.996 [2024-07-15 22:43:43.286576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:19:59.996 [2024-07-15 22:43:43.286589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.996 [2024-07-15 22:43:43.286602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.996 [2024-07-15 22:43:43.286616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.996 [2024-07-15 22:43:43.286629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.996 [2024-07-15 22:43:43.286642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.996 [2024-07-15 22:43:43.286655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3a990 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.286704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa76830 (9): Bad file descriptor 00:19:59.996 [2024-07-15 22:43:43.288495] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.288524] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.288538] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.288550] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.288562] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.288579] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.288591] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.288593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:19:59.996 [2024-07-15 22:43:43.288603] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.288617] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.288629] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.288628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x578610 (9): Bad file descriptor 00:19:59.996 [2024-07-15 22:43:43.288641] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.288653] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.288665] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.288677] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.288689] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 
00:19:59.996 [2024-07-15 22:43:43.288700] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.288712] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.288724] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.288735] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.288747] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.288759] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.288770] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.288782] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.288793] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.288805] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.288817] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.288828] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 
22:43:43.288840] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.288852] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.288863] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.288871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:59.996 [2024-07-15 22:43:43.288889] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.288907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa7440 with addr=10.0.0.2, port=4420 00:19:59.996 [2024-07-15 22:43:43.288934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa7440 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.288937] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.288951] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.288963] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.288976] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.288987] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.289023] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.289038] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.289050] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.289062] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.289074] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.289085] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.289097] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.289109] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.289120] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.289132] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.289144] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.289155] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.289167] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 
is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.289179] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.289190] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.289202] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.289214] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.289225] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.289237] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.289253] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.289265] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.289277] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.289289] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.289301] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.289312] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be 
set 00:19:59.996 [2024-07-15 22:43:43.289324] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936b20 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.289446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa7440 (9): Bad file descriptor 00:19:59.996 [2024-07-15 22:43:43.289516] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:59.996 [2024-07-15 22:43:43.289638] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:59.996 [2024-07-15 22:43:43.290056] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.290083] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.290097] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.290109] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.290120] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.290132] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.290144] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.290156] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.290168] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 
is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.290180] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.290192] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.290204] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.290216] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.290228] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.290240] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.290251] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.290264] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.290284] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.290297] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.290309] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.290321] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be 
set 00:19:59.996 [2024-07-15 22:43:43.290333] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.290346] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.290357] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.290370] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.290382] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.996 [2024-07-15 22:43:43.290394] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290406] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290418] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:59.997 [2024-07-15 22:43:43.290433] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578610 with addr=10.0.0.2, port=4420 00:19:59.997 [2024-07-15 22:43:43.290445] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290456] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x578610 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290458] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290473] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290474] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:19:59.997 [2024-07-15 22:43:43.290485] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:19:59.997 [2024-07-15 22:43:43.290498] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:19:59.997 [2024-07-15 22:43:43.290511] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290523] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290535] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290547] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290563] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290576] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290588] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290600] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290606] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:59.997 [2024-07-15 22:43:43.290612] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290627] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290639] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290651] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290663] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290675] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290674] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:59.997 [2024-07-15 22:43:43.290688] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290700] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290712] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290724] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290736] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290748] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290751] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:59.997 [2024-07-15 22:43:43.290760] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290772] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 
00:19:59.997 [2024-07-15 22:43:43.290784] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290796] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290807] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290819] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290831] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290842] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936fc0 is same with the state(5) to be set 00:19:59.997 [2024-07-15 22:43:43.290900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:59.997 [2024-07-15 22:43:43.290946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x578610 (9): Bad file descriptor 00:19:59.997 [2024-07-15 22:43:43.290998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.291025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.291047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.291062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.291078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.291092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.291107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.291121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.291136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.291150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.291165] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.291179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.291194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.291208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.291223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.291236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.291251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.291265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.291280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.291294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.291309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.291323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.291338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.291352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.291368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.291386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.291402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.291416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.291433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.291446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.291461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.291475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.291490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.291511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.291527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.291541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.291556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.291570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.291585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.291599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.291614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.291628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.291644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.291657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.291672] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.291686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.291701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.291714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.291730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.291743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.291762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.291777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.291792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.291806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.291821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.291835] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.291850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.291864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.291897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.291915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.291931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.291945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.291961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.291974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.291989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.292003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.292019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.292033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.292048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.292061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.292076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.292090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.292105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.292118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.292133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.292150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.292165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.292179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 
22:43:43.292194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.292208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.292222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.292236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.292251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.292265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.292279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.292293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.292307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.292321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.292336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.292349] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.292365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.997 [2024-07-15 22:43:43.292378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.997 [2024-07-15 22:43:43.292393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.292406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.292421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.292434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.292449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.292463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.292478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.292492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.292513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 
nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.292528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.292543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.292557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.292572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.292585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.292600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.292613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.292628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.292641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.292656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.292669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:59.998 [2024-07-15 22:43:43.292684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.292697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.292713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.292726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.292741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.292754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.292768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.292781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.292796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.292809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.292824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.292837] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.292852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.292869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.292893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.292908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.292922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0d440 is same with the state(5) to be set 00:19:59.998 [2024-07-15 22:43:43.292996] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc0d440 was disconnected and freed. reset controller. 00:19:59.998 [2024-07-15 22:43:43.293181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:19:59.998 [2024-07-15 22:43:43.293205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:19:59.998 [2024-07-15 22:43:43.293219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:19:59.998 [2024-07-15 22:43:43.294515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:59.998 [2024-07-15 22:43:43.294540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:19:59.998 [2024-07-15 22:43:43.294656] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:59.998 [2024-07-15 22:43:43.294897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:59.998 [2024-07-15 22:43:43.294933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc42240 with addr=10.0.0.2, port=4420 00:19:59.998 [2024-07-15 22:43:43.294950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc42240 is same with the state(5) to be set 00:19:59.998 [2024-07-15 22:43:43.295345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc42240 (9): Bad file descriptor 00:19:59.998 [2024-07-15 22:43:43.295447] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:59.998 [2024-07-15 22:43:43.295475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:59.998 [2024-07-15 22:43:43.295490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:19:59.998 [2024-07-15 22:43:43.295503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:19:59.998 [2024-07-15 22:43:43.295581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:59.998 [2024-07-15 22:43:43.295988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa98c60 (9): Bad file descriptor 00:19:59.998 [2024-07-15 22:43:43.296023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa99280 (9): Bad file descriptor 00:19:59.998 [2024-07-15 22:43:43.296054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3a990 (9): Bad file descriptor 00:19:59.998 [2024-07-15 22:43:43.296106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.998 [2024-07-15 22:43:43.296126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.296142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.998 [2024-07-15 22:43:43.296155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.296168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.998 [2024-07-15 22:43:43.296187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.296201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.998 [2024-07-15 22:43:43.296214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.296227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb470c0 
is same with the state(5) to be set 00:19:59.998 [2024-07-15 22:43:43.296274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.998 [2024-07-15 22:43:43.296295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.296309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.998 [2024-07-15 22:43:43.296322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.296335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.998 [2024-07-15 22:43:43.296348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.296362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.998 [2024-07-15 22:43:43.296374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.296386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3ab70 is same with the state(5) to be set 00:19:59.998 [2024-07-15 22:43:43.296519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.296541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.296561] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.296577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.296592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.296607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.296622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.296636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.296651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.296672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.296688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.296702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.296718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.296736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.296752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.296766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.296782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.296795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.296810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.296824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.296839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.296853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.296869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.296890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.296907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:59.998 [2024-07-15 22:43:43.296929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.296945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.296959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.296974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.296987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.297003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.297017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.297032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.297045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.297061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.297075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.297091] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.297104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.297124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.297138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.297154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.297168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.297183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.297197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.297212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.297225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.297241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.297254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.297269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.297282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.297298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.998 [2024-07-15 22:43:43.297311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.998 [2024-07-15 22:43:43.297327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.297340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.297356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.297369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.297384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.297398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.297413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.297426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.297442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.297456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.297471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.297488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.297505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.297518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.297533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.297547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.297562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.297575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.297591] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.297604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.297620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.297634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.297649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.297663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.297678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.297692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.297707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.297721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.297736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.297749] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.297764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.297778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.297793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.297807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.297822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.297836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.297855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.297869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.297891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.297905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.297931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.297945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.297961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.297974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.297990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.298003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.298025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.298039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.298055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.298068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.298083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.298097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 
22:43:43.298113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.298126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.298142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.298155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.298170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.298184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.298199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.298213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.298228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.298246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.298262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.298276] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.298292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.298305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.298320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.298334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.298349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.298363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.298378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.298391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.298407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.298421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.298436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 
nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.298450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.298463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae7fd0 is same with the state(5) to be set 00:19:59.999 [2024-07-15 22:43:43.299763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.299787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.299808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.299823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.299839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.299852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.299868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.299893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.299910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:59.999 [2024-07-15 22:43:43.299935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.299951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.299964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.299980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.299995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.300010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.300024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.300039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.300053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.300068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.300082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.300097] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.300110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.300126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.300140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.300156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.300169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.300185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.300198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.300213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.300227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.300242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.300256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.300271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.300285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.300304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.300318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.300333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.300347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.300362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.300375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.300392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.300406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.300421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.300435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.300451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.300465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.300481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.300494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.300510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.300524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.300539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.300553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.300568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.300582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.300597] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.300611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.300626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.300640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.999 [2024-07-15 22:43:43.300655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.999 [2024-07-15 22:43:43.300673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.300689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.300703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.300723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.300736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.300752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.300766] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.300787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.300800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.300816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.300830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.300845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.300858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.300874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.300896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.300921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.300936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.300951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.300964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.300980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.300993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.301008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.301022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.301037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.301051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.301070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.301085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.301100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.301114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 
22:43:43.301129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.301143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.301159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.301173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.301188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.301201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.301217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.301230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.301246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.301259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.301274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.301288] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.301303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.301317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.301332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.301346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.301361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.301375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.301390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.301404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.301419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.301437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.301453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 
nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.301467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.301482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.301495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.301511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.301524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.301540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.301553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.301569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.301582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.301597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.301611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:00.000 [2024-07-15 22:43:43.301627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.301640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.301655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.301669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.301684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.301697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.301711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0e7b0 is same with the state(5) to be set 00:20:00.000 [2024-07-15 22:43:43.303029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:00.000 [2024-07-15 22:43:43.303059] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:00.000 [2024-07-15 22:43:43.303221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:20:00.000 [2024-07-15 22:43:43.303523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:00.000 [2024-07-15 22:43:43.303553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa76830 with addr=10.0.0.2, port=4420 00:20:00.000 [2024-07-15 22:43:43.303569] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa76830 is same with the state(5) to be set 00:20:00.000 [2024-07-15 22:43:43.303750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:00.000 [2024-07-15 22:43:43.303781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa2450 with addr=10.0.0.2, port=4420 00:20:00.000 [2024-07-15 22:43:43.303797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa2450 is same with the state(5) to be set 00:20:00.000 [2024-07-15 22:43:43.304409] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:20:00.000 [2024-07-15 22:43:43.304565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:00.000 [2024-07-15 22:43:43.304592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa7440 with addr=10.0.0.2, port=4420 00:20:00.000 [2024-07-15 22:43:43.304607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa7440 is same with the state(5) to be set 00:20:00.000 [2024-07-15 22:43:43.304631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa76830 (9): Bad file descriptor 00:20:00.000 [2024-07-15 22:43:43.304650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa2450 (9): Bad file descriptor 00:20:00.000 [2024-07-15 22:43:43.304852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:00.000 [2024-07-15 22:43:43.304885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578610 with addr=10.0.0.2, port=4420 00:20:00.000 [2024-07-15 22:43:43.304903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x578610 is same with the state(5) to be set 00:20:00.000 [2024-07-15 22:43:43.304928] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa7440 (9): Bad file descriptor 00:20:00.000 [2024-07-15 22:43:43.304945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:00.000 [2024-07-15 22:43:43.304958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:00.000 [2024-07-15 22:43:43.304975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:00.000 [2024-07-15 22:43:43.305005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:00.000 [2024-07-15 22:43:43.305019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:00.000 [2024-07-15 22:43:43.305032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:00.000 [2024-07-15 22:43:43.305111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:00.000 [2024-07-15 22:43:43.305132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:00.000 [2024-07-15 22:43:43.305153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x578610 (9): Bad file descriptor 00:20:00.000 [2024-07-15 22:43:43.305171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:00.000 [2024-07-15 22:43:43.305184] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:00.000 [2024-07-15 22:43:43.305196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:20:00.000 [2024-07-15 22:43:43.305246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:00.000 [2024-07-15 22:43:43.305267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:00.000 [2024-07-15 22:43:43.305290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:20:00.000 [2024-07-15 22:43:43.305304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:20:00.000 [2024-07-15 22:43:43.305317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:20:00.000 [2024-07-15 22:43:43.305365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:00.000 [2024-07-15 22:43:43.305516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:00.000 [2024-07-15 22:43:43.305541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc42240 with addr=10.0.0.2, port=4420 00:20:00.000 [2024-07-15 22:43:43.305557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc42240 is same with the state(5) to be set 00:20:00.000 [2024-07-15 22:43:43.305607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc42240 (9): Bad file descriptor 00:20:00.000 [2024-07-15 22:43:43.305655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:00.000 [2024-07-15 22:43:43.305672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:00.000 [2024-07-15 22:43:43.305684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:00.000 [2024-07-15 22:43:43.305732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:00.000 [2024-07-15 22:43:43.306035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb470c0 (9): Bad file descriptor 00:20:00.000 [2024-07-15 22:43:43.306071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3ab70 (9): Bad file descriptor 00:20:00.000 [2024-07-15 22:43:43.306193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.306216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.306241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.306257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.306272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.306286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.306301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.306314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.306330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.306343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.306360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.306373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.306389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.306402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.306418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.306432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.306447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.306471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.306487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.306501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 
22:43:43.306516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.306530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.306546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.306559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.306575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.306589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.306605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.306618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.000 [2024-07-15 22:43:43.306634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.000 [2024-07-15 22:43:43.306648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.306664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.306677] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.306693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.306707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.306722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.306735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.306750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.306764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.306780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.306793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.306809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.306822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.306842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 
nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.306857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.306872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.306902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.306936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.306951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.306967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.306981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.306996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.307020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.307035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.307049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:00.001 [2024-07-15 22:43:43.307064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.307078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.307093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.307107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.307123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.307137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.307152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.307166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.307181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.307195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.307210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.307224] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.307240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.307258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.307274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.307288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.307304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.307318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.307333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.307347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.307363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.307376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.307392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.307405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.307421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.307435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.307451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.307464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.307479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.307493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.307508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.307522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.307538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.307552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.307567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.307581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.307598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.307612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.307634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.307649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.307664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.307678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.307694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.307708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.307723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 
22:43:43.307736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.307752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.307766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.307782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.307795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.307811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.307824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.307839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.307853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.307869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.307897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.307914] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.307929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.307945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.307960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.307975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.307989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.308005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.308022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.308038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.308052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.308068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.308081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.308097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.308110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.308125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.308139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.308154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.308167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.308181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa70440 is same with the state(5) to be set 00:20:00.001 [2024-07-15 22:43:43.309454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.309477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.309497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.309512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 
22:43:43.309527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.309541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.309556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.309570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.309585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.309599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.309614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.309628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.309643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.309661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.309677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.309691] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.309706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.309720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.309735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.309748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.001 [2024-07-15 22:43:43.309764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.001 [2024-07-15 22:43:43.309778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.309793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.309807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.309822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.309836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.309851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.309865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.309886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.309902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.309918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.309932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.309948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.309961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.309976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.309990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.310005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.310019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:00.002 [2024-07-15 22:43:43.310038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.310052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.310068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.310082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.310097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.310111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.310127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.310140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.310157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.310170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.310186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.310200] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.310215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.310229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.310245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.310259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.310275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.310288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.310303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.310317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.310332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.310346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.310361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.310376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.310391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.310409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.310425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.310439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.310454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.310467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.310483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.310496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.310512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.310526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.310541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.310555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.310570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.310584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.310600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.310613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.310629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.310643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.310658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.310672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.310687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 
22:43:43.310701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.310717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.310730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.310746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.310759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.310778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.310793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.310809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.310822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.310837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.310852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.310867] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.310891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.310917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.310934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.310950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.310964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.310979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.310993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.311009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.311022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.311037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.311052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.311067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.311081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.311096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.311111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.311126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.311139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.311155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.311172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.311189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.311202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.311217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 
[2024-07-15 22:43:43.311231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.311247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.311261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.311276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.311289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.311305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.311319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.311334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.311348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.311363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.311377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.311390] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71910 is same with the state(5) to be set 00:20:00.002 [2024-07-15 22:43:43.312628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.312650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.312670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.312685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.312701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.312714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.312729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.312742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.312757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.312775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.312791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.312805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.312820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.312833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.312849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.312863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.312891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.312909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.312924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.312937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.312953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.312967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:00.002 [2024-07-15 22:43:43.312982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.312995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.313010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.313024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.313039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.313053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.313068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.313082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.313097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.313111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.313126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.313140] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.313159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.313173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.313189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.313202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.313218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.313232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.313248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.313261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.002 [2024-07-15 22:43:43.313276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.002 [2024-07-15 22:43:43.313290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.313305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.313319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.313334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.313348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.313363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.313377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.313392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.313406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.313422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.313435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.313451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.313464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.313479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.313493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.313508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.313525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.313541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.313555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.313571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.313584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.313599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.313613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.313628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 
22:43:43.313642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.313657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.313670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.313686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.313699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.313715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.313728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.313743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.313757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.313772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.313785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.313800] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.313814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.313829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.313842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.313858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.313871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.313893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.313911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.313927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.313941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.313957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.313970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.313985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.313998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.314013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.314027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.314042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.314055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.314070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.314083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.314099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.314112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.314127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 
[2024-07-15 22:43:43.314140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.314155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.314168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.314184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.314197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.314212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.314226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.314241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.314254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.314273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.314287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.314302] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.314315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.314330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.314344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.314359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.314372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.314388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.314401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.314416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.314430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.314445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.314459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.314473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.314487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.314502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.314515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.314529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a40f0 is same with the state(5) to be set 00:20:00.003 [2024-07-15 22:43:43.315781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:00.003 [2024-07-15 22:43:43.315812] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:20:00.003 [2024-07-15 22:43:43.315832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:20:00.003 [2024-07-15 22:43:43.316539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:00.003 [2024-07-15 22:43:43.316570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa98c60 with addr=10.0.0.2, port=4420 00:20:00.003 [2024-07-15 22:43:43.316587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa98c60 is same with the state(5) to be set 00:20:00.003 [2024-07-15 22:43:43.316738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:00.003 [2024-07-15 22:43:43.316772] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa99280 with addr=10.0.0.2, port=4420 00:20:00.003 [2024-07-15 22:43:43.316787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99280 is same with the state(5) to be set 00:20:00.003 [2024-07-15 22:43:43.316937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:00.003 [2024-07-15 22:43:43.316962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb3a990 with addr=10.0.0.2, port=4420 00:20:00.003 [2024-07-15 22:43:43.316978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3a990 is same with the state(5) to be set 00:20:00.003 [2024-07-15 22:43:43.317821] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:00.003 [2024-07-15 22:43:43.317847] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:00.003 [2024-07-15 22:43:43.317863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:20:00.003 [2024-07-15 22:43:43.317886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:20:00.003 [2024-07-15 22:43:43.317904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:00.003 [2024-07-15 22:43:43.317969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa98c60 (9): Bad file descriptor 00:20:00.003 [2024-07-15 22:43:43.317993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa99280 (9): Bad file descriptor 00:20:00.003 [2024-07-15 22:43:43.318011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3a990 (9): Bad file descriptor 00:20:00.003 [2024-07-15 22:43:43.318099] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.318121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.318142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.318157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.318173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.318187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.318202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.318216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.318231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.318246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.318261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.318274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.318290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.318303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.318324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.318338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.318353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.318367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.318383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.318396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.318411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.318425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.318440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:00.003 [2024-07-15 22:43:43.318454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.318469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.318483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.318498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.318511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.318526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.318540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.318555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.318569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.318584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.318597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.003 [2024-07-15 22:43:43.318613] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.003 [2024-07-15 22:43:43.318627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.004 [2024-07-15 22:43:43.318642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.004 [2024-07-15 22:43:43.318655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.004 [2024-07-15 22:43:43.318671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.004 [2024-07-15 22:43:43.318688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.004 [2024-07-15 22:43:43.318704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.004 [2024-07-15 22:43:43.318718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.004 [2024-07-15 22:43:43.318733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.004 [2024-07-15 22:43:43.318747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.004 [2024-07-15 22:43:43.318762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.004 [2024-07-15 22:43:43.318775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.004 [2024-07-15 22:43:43.318791 .. 22:43:43.320006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23..63 nsid:1 lba:27520..32640 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (41 repeated command/completion pairs condensed)
00:20:00.004 [2024-07-15 22:43:43.320021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154bad0 is same with the state(5) to be set
00:20:00.004 [2024-07-15 22:43:43.321270 .. 22:43:43.323174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0..63 nsid:1 lba:16384..24448 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (64 repeated command/completion pairs condensed)
00:20:00.005 [2024-07-15 22:43:43.323188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1723b20 is same with the state(5) to be set
00:20:00.005 [2024-07-15 22:43:43.324796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:20:00.005 task offset: 16768 on job bdev=Nvme10n1 fails
00:20:00.005
00:20:00.005 Latency(us)
00:20:00.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:00.005 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:00.005 Job: Nvme1n1 ended in about 0.86 seconds with error 00:20:00.005 Verification LBA range: start 0x0 length 0x400 00:20:00.005 Nvme1n1 : 0.86 148.16 9.26 74.08 0.00 284640.65 24563.86 274959.93 00:20:00.005 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:00.005 Job: Nvme2n1 ended in about 0.86 seconds with error 00:20:00.005 Verification LBA range: start 0x0 length 0x400 00:20:00.005 Nvme2n1 : 0.86 149.06 9.32 74.53 0.00 276864.57 17670.45 256318.58 00:20:00.005 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:00.005 Job: Nvme3n1 ended in about 0.87 seconds with error 00:20:00.005 Verification LBA range: start 0x0 length 0x400 00:20:00.005 Nvme3n1 : 0.87 227.18 14.20 73.80 0.00 201182.13 6262.33 251658.24 00:20:00.005 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:00.005 Job: Nvme4n1 ended in about 0.87 seconds with error 00:20:00.005 Verification LBA range: start 0x0 length 0x400 00:20:00.005 Nvme4n1 : 0.87 146.52 9.16 73.26 0.00 269664.46 21262.79 273406.48 00:20:00.005 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:00.005 Job: Nvme5n1 ended in about 0.88 seconds with error 00:20:00.005 Verification LBA range: start 0x0 length 0x400 00:20:00.005 Nvme5n1 : 0.88 145.99 9.12 72.99 0.00 264674.92 23010.42 246997.90 00:20:00.005 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:00.005 Job: Nvme6n1 ended in about 0.85 seconds with error 00:20:00.005 Verification LBA range: start 0x0 length 0x400 00:20:00.005 Nvme6n1 : 0.85 150.15 9.38 75.08 0.00 250475.33 5849.69 312242.63 00:20:00.005 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:00.005 Job: Nvme7n1 ended in about 0.88 seconds with error 00:20:00.005 Verification LBA range: start 0x0 length 0x400 00:20:00.005 Nvme7n1 : 0.88 145.47 9.09 72.73 0.00 253676.91 20000.62 236123.78 00:20:00.005 Job: Nvme8n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:20:00.005 Job: Nvme8n1 ended in about 0.89 seconds with error 00:20:00.005 Verification LBA range: start 0x0 length 0x400 00:20:00.005 Nvme8n1 : 0.89 216.85 13.55 72.28 0.00 187191.18 22136.60 236123.78 00:20:00.005 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:00.005 Job: Nvme9n1 ended in about 0.89 seconds with error 00:20:00.005 Verification LBA range: start 0x0 length 0x400 00:20:00.005 Nvme9n1 : 0.89 144.05 9.00 72.03 0.00 244829.55 42719.76 248551.35 00:20:00.005 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:00.005 Job: Nvme10n1 ended in about 0.85 seconds with error 00:20:00.005 Verification LBA range: start 0x0 length 0x400 00:20:00.005 Nvme10n1 : 0.85 151.07 9.44 75.54 0.00 225196.69 9903.22 290494.39 00:20:00.005 =================================================================================================================== 00:20:00.005 Total : 1624.51 101.53 736.33 0.00 242510.43 5849.69 312242.63 00:20:00.005 [2024-07-15 22:43:43.353431] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:00.005 [2024-07-15 22:43:43.353519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:20:00.005 [2024-07-15 22:43:43.353916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:00.005 [2024-07-15 22:43:43.353956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa2450 with addr=10.0.0.2, port=4420 00:20:00.005 [2024-07-15 22:43:43.353977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa2450 is same with the state(5) to be set 00:20:00.005 [2024-07-15 22:43:43.354359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:00.005 [2024-07-15 22:43:43.354385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa76830 with 
addr=10.0.0.2, port=4420 00:20:00.005 [2024-07-15 22:43:43.354401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa76830 is same with the state(5) to be set 00:20:00.005 [2024-07-15 22:43:43.354544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:00.005 [2024-07-15 22:43:43.354570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa7440 with addr=10.0.0.2, port=4420 00:20:00.005 [2024-07-15 22:43:43.354585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa7440 is same with the state(5) to be set 00:20:00.005 [2024-07-15 22:43:43.354729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:00.005 [2024-07-15 22:43:43.354754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578610 with addr=10.0.0.2, port=4420 00:20:00.005 [2024-07-15 22:43:43.354781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x578610 is same with the state(5) to be set 00:20:00.005 [2024-07-15 22:43:43.354926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:00.005 [2024-07-15 22:43:43.354951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc42240 with addr=10.0.0.2, port=4420 00:20:00.005 [2024-07-15 22:43:43.354965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc42240 is same with the state(5) to be set 00:20:00.005 [2024-07-15 22:43:43.354982] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:00.005 [2024-07-15 22:43:43.354995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:00.005 [2024-07-15 22:43:43.355012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:20:00.005 [2024-07-15 22:43:43.355037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:00.005 [2024-07-15 22:43:43.355051] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:00.005 [2024-07-15 22:43:43.355064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:20:00.005 [2024-07-15 22:43:43.355081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:20:00.005 [2024-07-15 22:43:43.355094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:20:00.005 [2024-07-15 22:43:43.355107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:20:00.005 [2024-07-15 22:43:43.355281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:00.005 [2024-07-15 22:43:43.355305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:00.005 [2024-07-15 22:43:43.355317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:00.005 [2024-07-15 22:43:43.355477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:00.005 [2024-07-15 22:43:43.355505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb3ab70 with addr=10.0.0.2, port=4420 00:20:00.005 [2024-07-15 22:43:43.355520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3ab70 is same with the state(5) to be set 00:20:00.005 [2024-07-15 22:43:43.355659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:00.005 [2024-07-15 22:43:43.355685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb470c0 with addr=10.0.0.2, port=4420 00:20:00.005 [2024-07-15 22:43:43.355699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb470c0 is same with the state(5) to be set 00:20:00.005 [2024-07-15 22:43:43.355727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa2450 (9): Bad file descriptor 00:20:00.005 [2024-07-15 22:43:43.355750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa76830 (9): Bad file descriptor 00:20:00.005 [2024-07-15 22:43:43.355767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa7440 (9): Bad file descriptor 00:20:00.005 [2024-07-15 22:43:43.355784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x578610 (9): Bad file descriptor 00:20:00.005 [2024-07-15 22:43:43.355801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc42240 (9): Bad file descriptor 00:20:00.005 [2024-07-15 22:43:43.355846] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:00.005 [2024-07-15 22:43:43.355867] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:00.005 [2024-07-15 22:43:43.355893] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:00.006 [2024-07-15 22:43:43.355917] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:00.006 [2024-07-15 22:43:43.355935] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:00.006 [2024-07-15 22:43:43.356567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3ab70 (9): Bad file descriptor 00:20:00.006 [2024-07-15 22:43:43.356596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb470c0 (9): Bad file descriptor 00:20:00.006 [2024-07-15 22:43:43.356612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:00.006 [2024-07-15 22:43:43.356625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:00.006 [2024-07-15 22:43:43.356637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:00.006 [2024-07-15 22:43:43.356656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:00.006 [2024-07-15 22:43:43.356669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:00.006 [2024-07-15 22:43:43.356681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:00.006 [2024-07-15 22:43:43.356698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:00.006 [2024-07-15 22:43:43.356711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:00.006 [2024-07-15 22:43:43.356724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:20:00.006 [2024-07-15 22:43:43.356740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:20:00.006 [2024-07-15 22:43:43.356753] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:20:00.006 [2024-07-15 22:43:43.356766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:20:00.006 [2024-07-15 22:43:43.356781] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:00.006 [2024-07-15 22:43:43.356794] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:00.006 [2024-07-15 22:43:43.356806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:00.006 [2024-07-15 22:43:43.356870] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:20:00.006 [2024-07-15 22:43:43.356908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:20:00.006 [2024-07-15 22:43:43.356930] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:00.006 [2024-07-15 22:43:43.356946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:00.006 [2024-07-15 22:43:43.356958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:00.006 [2024-07-15 22:43:43.356969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:00.006 [2024-07-15 22:43:43.356980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:00.006 [2024-07-15 22:43:43.357012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:20:00.006 [2024-07-15 22:43:43.357027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:20:00.006 [2024-07-15 22:43:43.357040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:20:00.006 [2024-07-15 22:43:43.357056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:20:00.006 [2024-07-15 22:43:43.357075] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:20:00.006 [2024-07-15 22:43:43.357088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:20:00.006 [2024-07-15 22:43:43.357132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:00.006 [2024-07-15 22:43:43.357165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:00.006 [2024-07-15 22:43:43.357182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:00.006 [2024-07-15 22:43:43.357323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:00.006 [2024-07-15 22:43:43.357349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb3a990 with addr=10.0.0.2, port=4420 00:20:00.006 [2024-07-15 22:43:43.357364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3a990 is same with the state(5) to be set 00:20:00.006 [2024-07-15 22:43:43.357509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:00.006 [2024-07-15 22:43:43.357533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa99280 with addr=10.0.0.2, port=4420 00:20:00.006 [2024-07-15 22:43:43.357548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99280 is same with the state(5) to be set 00:20:00.006 [2024-07-15 22:43:43.357710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:00.006 [2024-07-15 22:43:43.357734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa98c60 with addr=10.0.0.2, port=4420 00:20:00.006 [2024-07-15 22:43:43.357748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa98c60 is same with the state(5) to be set 00:20:00.006 [2024-07-15 22:43:43.357794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3a990 (9): Bad file descriptor 00:20:00.006 [2024-07-15 22:43:43.357818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa99280 (9): Bad file descriptor 00:20:00.006 [2024-07-15 22:43:43.357835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa98c60 (9): Bad file descriptor 00:20:00.006 [2024-07-15 22:43:43.357872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:20:00.006 [2024-07-15 
22:43:43.357919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:20:00.006 [2024-07-15 22:43:43.357933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:20:00.006 [2024-07-15 22:43:43.357949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:00.006 [2024-07-15 22:43:43.357962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:00.006 [2024-07-15 22:43:43.357974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:20:00.006 [2024-07-15 22:43:43.357988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:00.006 [2024-07-15 22:43:43.358001] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:00.006 [2024-07-15 22:43:43.358012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:00.006 [2024-07-15 22:43:43.358048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:00.006 [2024-07-15 22:43:43.358065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:00.006 [2024-07-15 22:43:43.358076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:00.571 22:43:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:20:00.571 22:43:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:20:01.503 22:43:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1302823 00:20:01.503 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1302823) - No such process 00:20:01.503 22:43:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:20:01.503 22:43:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:20:01.503 22:43:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:01.503 22:43:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:01.503 22:43:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:01.503 22:43:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:01.503 22:43:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:01.503 22:43:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:20:01.503 22:43:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:01.503 22:43:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:20:01.503 22:43:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:01.503 22:43:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:01.503 rmmod nvme_tcp 00:20:01.503 rmmod nvme_fabrics 00:20:01.503 rmmod nvme_keyring 00:20:01.503 22:43:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:01.503 22:43:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:20:01.503 22:43:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:20:01.503 22:43:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:20:01.503 22:43:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:01.503 22:43:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:01.503 22:43:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:01.503 22:43:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:01.503 22:43:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:01.503 22:43:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.503 22:43:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:01.503 22:43:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.037 22:43:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:04.037 00:20:04.037 real 0m8.395s 00:20:04.037 user 0m22.028s 00:20:04.037 sys 0m1.603s 00:20:04.037 22:43:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:20:04.038 22:43:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:04.038 ************************************ 00:20:04.038 END TEST nvmf_shutdown_tc3 00:20:04.038 ************************************ 00:20:04.038 22:43:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1136 -- # 
return 0 00:20:04.038 22:43:47 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:20:04.038 00:20:04.038 real 0m29.415s 00:20:04.038 user 1m24.683s 00:20:04.038 sys 0m6.450s 00:20:04.038 22:43:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1118 -- # xtrace_disable 00:20:04.038 22:43:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:04.038 ************************************ 00:20:04.038 END TEST nvmf_shutdown 00:20:04.038 ************************************ 00:20:04.038 22:43:47 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:20:04.038 22:43:47 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:04.038 22:43:47 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:04.038 22:43:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:04.038 22:43:47 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:04.038 22:43:47 nvmf_tcp -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:04.038 22:43:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:04.038 22:43:47 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:20:04.038 22:43:47 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:04.038 22:43:47 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:20:04.038 22:43:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:20:04.038 22:43:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:04.038 ************************************ 00:20:04.038 START TEST nvmf_multicontroller 00:20:04.038 ************************************ 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:04.038 * Looking for test storage... 
00:20:04.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:04.038 
22:43:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:04.038 22:43:47 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:20:04.038 22:43:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:05.991 22:43:49 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:05.991 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:05.991 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:05.991 22:43:49 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:05.991 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:05.991 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # 
nvmf_tcp_init 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:05.991 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:05.992 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:05.992 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:05.992 22:43:49 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:05.992 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:05.992 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:05.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:05.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:20:05.992 00:20:05.992 --- 10.0.0.2 ping statistics --- 00:20:05.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.992 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:20:05.992 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:05.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:05.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:20:05.992 00:20:05.992 --- 10.0.0.1 ping statistics --- 00:20:05.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.992 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:20:05.992 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:05.992 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:20:05.992 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:05.992 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:05.992 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:05.992 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:05.992 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:05.992 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:05.992 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 
-- # modprobe nvme-tcp 00:20:05.992 22:43:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:05.992 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:05.992 22:43:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:05.992 22:43:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:05.992 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1305344 00:20:05.992 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:05.992 22:43:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1305344 00:20:05.992 22:43:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@823 -- # '[' -z 1305344 ']' 00:20:05.992 22:43:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.992 22:43:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@828 -- # local max_retries=100 00:20:05.992 22:43:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:05.992 22:43:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # xtrace_disable 00:20:05.992 22:43:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:05.992 [2024-07-15 22:43:49.312796] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:20:05.992 [2024-07-15 22:43:49.312887] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:05.992 [2024-07-15 22:43:49.380964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:06.251 [2024-07-15 22:43:49.493737] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:06.251 [2024-07-15 22:43:49.493788] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:06.251 [2024-07-15 22:43:49.493803] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:06.251 [2024-07-15 22:43:49.493816] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:06.251 [2024-07-15 22:43:49.493825] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:06.251 [2024-07-15 22:43:49.493907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:06.251 [2024-07-15 22:43:49.493959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:06.251 [2024-07-15 22:43:49.493962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:06.820 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:20:06.820 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # return 0 00:20:06.820 22:43:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:06.820 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:06.820 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:07.079 [2024-07-15 22:43:50.326942] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:07.079 Malloc0 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 
00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:07.079 [2024-07-15 22:43:50.394438] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:07.079 [2024-07-15 22:43:50.402315] tcp.c: 981:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:07.079 Malloc1 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1305501 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1305501 /var/tmp/bdevperf.sock 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@823 -- # '[' -z 1305501 ']' 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@828 -- # local max_retries=100 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:07.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # xtrace_disable 00:20:07.079 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:07.338 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:20:07.338 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # return 0 00:20:07.338 22:43:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:07.338 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:07.338 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:07.597 NVMe0n1 00:20:07.597 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:07.597 22:43:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:07.597 22:43:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:07.597 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:07.597 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:07.597 22:43:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:07.597 1 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@642 -- # local es=0 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@645 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:07.597 request: 00:20:07.597 { 00:20:07.597 "name": "NVMe0", 00:20:07.597 "trtype": "tcp", 00:20:07.597 "traddr": "10.0.0.2", 00:20:07.597 "adrfam": "ipv4", 00:20:07.597 "trsvcid": "4420", 00:20:07.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.597 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:07.597 "hostaddr": "10.0.0.2", 00:20:07.597 "hostsvcid": "60000", 00:20:07.597 "prchk_reftag": false, 00:20:07.597 "prchk_guard": false, 00:20:07.597 "hdgst": false, 00:20:07.597 "ddgst": false, 00:20:07.597 "method": "bdev_nvme_attach_controller", 00:20:07.597 "req_id": 1 00:20:07.597 } 00:20:07.597 Got JSON-RPC error response 00:20:07.597 response: 00:20:07.597 { 00:20:07.597 "code": -114, 00:20:07.597 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:07.597 } 00:20:07.597 
22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@645 -- # es=1 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@642 -- # local es=0 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@645 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # 
set +x 00:20:07.597 request: 00:20:07.597 { 00:20:07.597 "name": "NVMe0", 00:20:07.597 "trtype": "tcp", 00:20:07.597 "traddr": "10.0.0.2", 00:20:07.597 "adrfam": "ipv4", 00:20:07.597 "trsvcid": "4420", 00:20:07.597 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:07.597 "hostaddr": "10.0.0.2", 00:20:07.597 "hostsvcid": "60000", 00:20:07.597 "prchk_reftag": false, 00:20:07.597 "prchk_guard": false, 00:20:07.597 "hdgst": false, 00:20:07.597 "ddgst": false, 00:20:07.597 "method": "bdev_nvme_attach_controller", 00:20:07.597 "req_id": 1 00:20:07.597 } 00:20:07.597 Got JSON-RPC error response 00:20:07.597 response: 00:20:07.597 { 00:20:07.597 "code": -114, 00:20:07.597 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:07.597 } 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@645 -- # es=1 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@642 -- # local es=0 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@630 
-- # local arg=rpc_cmd 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@645 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:07.597 request: 00:20:07.597 { 00:20:07.597 "name": "NVMe0", 00:20:07.597 "trtype": "tcp", 00:20:07.597 "traddr": "10.0.0.2", 00:20:07.597 "adrfam": "ipv4", 00:20:07.597 "trsvcid": "4420", 00:20:07.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.597 "hostaddr": "10.0.0.2", 00:20:07.597 "hostsvcid": "60000", 00:20:07.597 "prchk_reftag": false, 00:20:07.597 "prchk_guard": false, 00:20:07.597 "hdgst": false, 00:20:07.597 "ddgst": false, 00:20:07.597 "multipath": "disable", 00:20:07.597 "method": "bdev_nvme_attach_controller", 00:20:07.597 "req_id": 1 00:20:07.597 } 00:20:07.597 Got JSON-RPC error response 00:20:07.597 response: 00:20:07.597 { 00:20:07.597 "code": -114, 00:20:07.597 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:20:07.597 } 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@645 -- # es=1 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@664 -- # [[ -n '' ]] 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@642 -- # local es=0 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@645 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:07.597 request: 00:20:07.597 { 00:20:07.597 "name": "NVMe0", 00:20:07.597 "trtype": "tcp", 00:20:07.597 "traddr": "10.0.0.2", 00:20:07.597 "adrfam": "ipv4", 00:20:07.597 "trsvcid": "4420", 00:20:07.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.597 "hostaddr": "10.0.0.2", 00:20:07.597 
"hostsvcid": "60000", 00:20:07.597 "prchk_reftag": false, 00:20:07.597 "prchk_guard": false, 00:20:07.597 "hdgst": false, 00:20:07.597 "ddgst": false, 00:20:07.597 "multipath": "failover", 00:20:07.597 "method": "bdev_nvme_attach_controller", 00:20:07.597 "req_id": 1 00:20:07.597 } 00:20:07.597 Got JSON-RPC error response 00:20:07.597 response: 00:20:07.597 { 00:20:07.597 "code": -114, 00:20:07.597 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:07.597 } 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:20:07.597 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@645 -- # es=1 00:20:07.598 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:20:07.598 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:20:07.598 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:20:07.598 22:43:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:07.598 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:07.598 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:07.855 00:20:07.855 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:07.855 22:43:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:07.855 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:07.856 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:07.856 22:43:51 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:07.856 22:43:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:07.856 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:07.856 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:07.856 00:20:07.856 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:07.856 22:43:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:07.856 22:43:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:07.856 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:07.856 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:07.856 22:43:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:07.856 22:43:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:07.856 22:43:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:09.230 0 00:20:09.230 22:43:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:09.230 22:43:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:09.230 22:43:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:09.230 22:43:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:09.230 
22:43:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1305501 00:20:09.230 22:43:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@942 -- # '[' -z 1305501 ']' 00:20:09.230 22:43:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # kill -0 1305501 00:20:09.230 22:43:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # uname 00:20:09.230 22:43:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:20:09.230 22:43:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1305501 00:20:09.230 22:43:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:20:09.230 22:43:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:20:09.230 22:43:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1305501' 00:20:09.230 killing process with pid 1305501 00:20:09.230 22:43:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@961 -- # kill 1305501 00:20:09.230 22:43:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # wait 1305501 00:20:09.488 22:43:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:09.488 22:43:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:09.488 22:43:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:09.488 22:43:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:09.488 22:43:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:09.488 22:43:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:09.488 22:43:52 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:09.488 22:43:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:09.488 22:43:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:09.488 22:43:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:09.488 22:43:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1606 -- # read -r file 00:20:09.489 22:43:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:09.489 22:43:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # sort -u 00:20:09.489 22:43:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # cat 00:20:09.489 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:09.489 [2024-07-15 22:43:50.508710] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:20:09.489 [2024-07-15 22:43:50.508793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1305501 ] 00:20:09.489 [2024-07-15 22:43:50.567579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.489 [2024-07-15 22:43:50.676792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.489 [2024-07-15 22:43:51.284467] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 4ea1d750-5ddc-4241-a8e6-b0e8c026c16f already exists 00:20:09.489 [2024-07-15 22:43:51.284503] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:4ea1d750-5ddc-4241-a8e6-b0e8c026c16f alias for bdev NVMe1n1 00:20:09.489 [2024-07-15 22:43:51.284532] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:09.489 Running I/O for 1 seconds... 00:20:09.489 00:20:09.489 Latency(us) 00:20:09.489 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.489 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:09.489 NVMe0n1 : 1.01 17382.49 67.90 0.00 0.00 7331.90 6941.96 14951.92 00:20:09.489 =================================================================================================================== 00:20:09.489 Total : 17382.49 67.90 0.00 0.00 7331.90 6941.96 14951.92 00:20:09.489 Received shutdown signal, test time was about 1.000000 seconds 00:20:09.489 00:20:09.489 Latency(us) 00:20:09.489 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.489 =================================================================================================================== 00:20:09.489 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:09.489 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:09.489 22:43:52 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@1612 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:09.489 22:43:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1606 -- # read -r file 00:20:09.489 22:43:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:09.489 22:43:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:09.489 22:43:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:20:09.489 22:43:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:09.489 22:43:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:20:09.489 22:43:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:09.489 22:43:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:09.489 rmmod nvme_tcp 00:20:09.489 rmmod nvme_fabrics 00:20:09.489 rmmod nvme_keyring 00:20:09.489 22:43:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:09.489 22:43:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:20:09.489 22:43:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:20:09.489 22:43:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1305344 ']' 00:20:09.489 22:43:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1305344 00:20:09.489 22:43:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@942 -- # '[' -z 1305344 ']' 00:20:09.489 22:43:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # kill -0 1305344 00:20:09.489 22:43:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # uname 00:20:09.489 22:43:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:20:09.489 22:43:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1305344 
00:20:09.489 22:43:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:20:09.489 22:43:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:20:09.489 22:43:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1305344' 00:20:09.489 killing process with pid 1305344 00:20:09.489 22:43:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@961 -- # kill 1305344 00:20:09.489 22:43:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # wait 1305344 00:20:09.747 22:43:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:09.747 22:43:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:09.747 22:43:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:09.747 22:43:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:09.747 22:43:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:09.747 22:43:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.747 22:43:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:09.747 22:43:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.278 22:43:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:12.278 00:20:12.278 real 0m8.112s 00:20:12.278 user 0m13.516s 00:20:12.278 sys 0m2.568s 00:20:12.278 22:43:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1118 -- # xtrace_disable 00:20:12.278 22:43:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.278 ************************************ 00:20:12.278 END TEST nvmf_multicontroller 00:20:12.278 
************************************ 00:20:12.278 22:43:55 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:20:12.278 22:43:55 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:12.278 22:43:55 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:20:12.278 22:43:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:20:12.278 22:43:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:12.278 ************************************ 00:20:12.278 START TEST nvmf_aer 00:20:12.278 ************************************ 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:12.278 * Looking for test storage... 00:20:12.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.278 
22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:12.278 22:43:55 
nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.278 22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:12.279 22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:12.279 22:43:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:20:12.279 22:43:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:20:14.180 22:43:57 
nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:14.180 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:14.180 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:14.180 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:14.180 22:43:57 
nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:14.180 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:14.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:14.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:20:14.180 00:20:14.180 --- 10.0.0.2 ping statistics --- 00:20:14.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.180 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:14.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:14.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:20:14.180 00:20:14.180 --- 10.0.0.1 ping statistics --- 00:20:14.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.180 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:14.180 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1307715 00:20:14.181 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:14.181 22:43:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1307715 00:20:14.181 22:43:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@823 -- # '[' -z 1307715 ']' 00:20:14.181 22:43:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:20:14.181 22:43:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@828 -- # local max_retries=100 00:20:14.181 22:43:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.181 22:43:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # xtrace_disable 00:20:14.181 22:43:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:14.181 [2024-07-15 22:43:57.508850] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:20:14.181 [2024-07-15 22:43:57.508961] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.181 [2024-07-15 22:43:57.578636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:14.438 [2024-07-15 22:43:57.699193] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.439 [2024-07-15 22:43:57.699241] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.439 [2024-07-15 22:43:57.699255] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.439 [2024-07-15 22:43:57.699266] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:14.439 [2024-07-15 22:43:57.699275] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:14.439 [2024-07-15 22:43:57.699366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.439 [2024-07-15 22:43:57.699486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.439 [2024-07-15 22:43:57.699532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:14.439 [2024-07-15 22:43:57.699535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.002 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:20:15.002 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # return 0 00:20:15.002 22:43:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:15.002 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:15.002 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:15.260 22:43:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:15.260 22:43:58 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:15.260 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:15.260 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:15.260 [2024-07-15 22:43:58.510854] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.260 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:15.260 22:43:58 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:15.260 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:15.260 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:15.260 Malloc0 00:20:15.260 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:15.260 22:43:58 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:15.260 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:15.260 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:15.260 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:15.260 22:43:58 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:15.260 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:15.260 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:15.260 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:15.260 22:43:58 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:15.260 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:15.260 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:15.260 [2024-07-15 22:43:58.561936] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:15.260 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:15.260 22:43:58 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:15.260 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:15.260 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:15.260 [ 00:20:15.260 { 00:20:15.260 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:15.260 "subtype": "Discovery", 00:20:15.260 "listen_addresses": [], 00:20:15.260 "allow_any_host": true, 00:20:15.260 "hosts": [] 00:20:15.260 }, 00:20:15.260 { 00:20:15.260 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.260 "subtype": "NVMe", 00:20:15.260 "listen_addresses": [ 00:20:15.260 { 00:20:15.260 "trtype": "TCP", 00:20:15.260 "adrfam": "IPv4", 
00:20:15.261 "traddr": "10.0.0.2", 00:20:15.261 "trsvcid": "4420" 00:20:15.261 } 00:20:15.261 ], 00:20:15.261 "allow_any_host": true, 00:20:15.261 "hosts": [], 00:20:15.261 "serial_number": "SPDK00000000000001", 00:20:15.261 "model_number": "SPDK bdev Controller", 00:20:15.261 "max_namespaces": 2, 00:20:15.261 "min_cntlid": 1, 00:20:15.261 "max_cntlid": 65519, 00:20:15.261 "namespaces": [ 00:20:15.261 { 00:20:15.261 "nsid": 1, 00:20:15.261 "bdev_name": "Malloc0", 00:20:15.261 "name": "Malloc0", 00:20:15.261 "nguid": "4216F2A4F67A44CC95C72C036F3F9B98", 00:20:15.261 "uuid": "4216f2a4-f67a-44cc-95c7-2c036f3f9b98" 00:20:15.261 } 00:20:15.261 ] 00:20:15.261 } 00:20:15.261 ] 00:20:15.261 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:15.261 22:43:58 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:15.261 22:43:58 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:15.261 22:43:58 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1307870 00:20:15.261 22:43:58 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:15.261 22:43:58 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:15.261 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1259 -- # local i=0 00:20:15.261 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1260 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:15.261 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # '[' 0 -lt 200 ']' 00:20:15.261 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # i=1 00:20:15.261 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # sleep 0.1 00:20:15.261 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1260 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:15.261 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # '[' 1 -lt 200 ']' 00:20:15.261 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # i=2 00:20:15.261 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # sleep 0.1 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1260 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1270 -- # return 0 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:15.519 Malloc1 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:15.519 [ 00:20:15.519 { 00:20:15.519 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:15.519 "subtype": "Discovery", 00:20:15.519 "listen_addresses": [], 00:20:15.519 "allow_any_host": true, 00:20:15.519 "hosts": [] 00:20:15.519 }, 00:20:15.519 { 00:20:15.519 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:20:15.519 "subtype": "NVMe", 00:20:15.519 "listen_addresses": [ 00:20:15.519 { 00:20:15.519 "trtype": "TCP", 00:20:15.519 "adrfam": "IPv4", 00:20:15.519 "traddr": "10.0.0.2", 00:20:15.519 "trsvcid": "4420" 00:20:15.519 } 00:20:15.519 ], 00:20:15.519 "allow_any_host": true, 00:20:15.519 "hosts": [], 00:20:15.519 "serial_number": "SPDK00000000000001", 00:20:15.519 "model_number": "SPDK bdev Controller", 00:20:15.519 "max_namespaces": 2, 00:20:15.519 "min_cntlid": 1, 00:20:15.519 "max_cntlid": 65519, 00:20:15.519 "namespaces": [ 00:20:15.519 { 00:20:15.519 "nsid": 1, 00:20:15.519 "bdev_name": "Malloc0", 00:20:15.519 "name": "Malloc0", 00:20:15.519 "nguid": "4216F2A4F67A44CC95C72C036F3F9B98", 00:20:15.519 "uuid": "4216f2a4-f67a-44cc-95c7-2c036f3f9b98" 00:20:15.519 }, 00:20:15.519 { 00:20:15.519 "nsid": 2, 00:20:15.519 "bdev_name": "Malloc1", 00:20:15.519 "name": "Malloc1", 00:20:15.519 "nguid": "4A21B22AB22341EA90CD8A2E52C545F2", 00:20:15.519 "uuid": "4a21b22a-b223-41ea-90cd-8a2e52c545f2" 00:20:15.519 } 00:20:15.519 ] 00:20:15.519 } 00:20:15.519 ] 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1307870 00:20:15.519 Asynchronous Event Request test 00:20:15.519 Attaching to 10.0.0.2 00:20:15.519 Attached to 10.0.0.2 00:20:15.519 Registering asynchronous event callbacks... 00:20:15.519 Starting namespace attribute notice tests for all controllers... 00:20:15.519 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:15.519 aer_cb - Changed Namespace 00:20:15.519 Cleaning up... 
00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:15.519 rmmod nvme_tcp 00:20:15.519 rmmod nvme_fabrics 00:20:15.519 rmmod nvme_keyring 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer 
-- nvmf/common.sh@124 -- # set -e 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1307715 ']' 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1307715 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@942 -- # '[' -z 1307715 ']' 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # kill -0 1307715 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@947 -- # uname 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1307715 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1307715' 00:20:15.519 killing process with pid 1307715 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@961 -- # kill 1307715 00:20:15.519 22:43:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # wait 1307715 00:20:15.778 22:43:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:15.778 22:43:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:15.778 22:43:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:15.778 22:43:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:15.778 22:43:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:15.778 22:43:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.778 22:43:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:20:15.778 22:43:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.309 22:44:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:18.309 00:20:18.309 real 0m6.049s 00:20:18.309 user 0m7.005s 00:20:18.309 sys 0m1.893s 00:20:18.309 22:44:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1118 -- # xtrace_disable 00:20:18.309 22:44:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:18.309 ************************************ 00:20:18.309 END TEST nvmf_aer 00:20:18.310 ************************************ 00:20:18.310 22:44:01 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:20:18.310 22:44:01 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:18.310 22:44:01 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:20:18.310 22:44:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:20:18.310 22:44:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:18.310 ************************************ 00:20:18.310 START TEST nvmf_async_init 00:20:18.310 ************************************ 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:18.310 * Looking for test storage... 
00:20:18.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=8f82f1bff1ec413a9c7467ad53118577 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:20:18.310 22:44:01 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@10 -- # set +x 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:20.213 
22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:20.213 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ 
tcp == rdma ]] 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:20.213 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:20.213 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.213 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:20.214 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:20.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:20.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:20:20.214 00:20:20.214 --- 10.0.0.2 ping statistics --- 00:20:20.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.214 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:20.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:20.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:20:20.214 00:20:20.214 --- 10.0.0.1 ping statistics --- 00:20:20.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.214 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 
00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1309922 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1309922 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@823 -- # '[' -z 1309922 ']' 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@828 -- # local max_retries=100 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # xtrace_disable 00:20:20.214 22:44:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:20.214 [2024-07-15 22:44:03.657386] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:20:20.214 [2024-07-15 22:44:03.657472] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.472 [2024-07-15 22:44:03.724324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.472 [2024-07-15 22:44:03.839331] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.472 [2024-07-15 22:44:03.839398] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:20.472 [2024-07-15 22:44:03.839414] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:20.472 [2024-07-15 22:44:03.839427] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:20.472 [2024-07-15 22:44:03.839438] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:20.472 [2024-07-15 22:44:03.839468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.405 22:44:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:20:21.405 22:44:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # return 0 00:20:21.405 22:44:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:21.405 22:44:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:21.405 22:44:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.405 22:44:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:21.405 22:44:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:21.405 22:44:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:21.405 22:44:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.405 [2024-07-15 22:44:04.663126] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:21.405 22:44:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:21.405 22:44:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:21.405 22:44:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:21.405 22:44:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.405 null0 00:20:21.405 
22:44:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:21.405 22:44:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:21.405 22:44:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:21.405 22:44:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.405 22:44:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:21.405 22:44:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:21.405 22:44:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:21.405 22:44:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.405 22:44:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:21.405 22:44:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8f82f1bff1ec413a9c7467ad53118577 00:20:21.405 22:44:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:21.405 22:44:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.405 22:44:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:21.405 22:44:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:21.405 22:44:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:21.405 22:44:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.405 [2024-07-15 22:44:04.703382] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:21.405 22:44:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:21.405 22:44:04 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:21.405 22:44:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:21.405 22:44:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.664 nvme0n1 00:20:21.664 22:44:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:21.664 22:44:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:21.664 22:44:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:21.664 22:44:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.664 [ 00:20:21.664 { 00:20:21.664 "name": "nvme0n1", 00:20:21.664 "aliases": [ 00:20:21.664 "8f82f1bf-f1ec-413a-9c74-67ad53118577" 00:20:21.664 ], 00:20:21.664 "product_name": "NVMe disk", 00:20:21.664 "block_size": 512, 00:20:21.664 "num_blocks": 2097152, 00:20:21.664 "uuid": "8f82f1bf-f1ec-413a-9c74-67ad53118577", 00:20:21.664 "assigned_rate_limits": { 00:20:21.664 "rw_ios_per_sec": 0, 00:20:21.664 "rw_mbytes_per_sec": 0, 00:20:21.664 "r_mbytes_per_sec": 0, 00:20:21.664 "w_mbytes_per_sec": 0 00:20:21.664 }, 00:20:21.664 "claimed": false, 00:20:21.664 "zoned": false, 00:20:21.664 "supported_io_types": { 00:20:21.664 "read": true, 00:20:21.664 "write": true, 00:20:21.664 "unmap": false, 00:20:21.664 "flush": true, 00:20:21.664 "reset": true, 00:20:21.664 "nvme_admin": true, 00:20:21.664 "nvme_io": true, 00:20:21.664 "nvme_io_md": false, 00:20:21.664 "write_zeroes": true, 00:20:21.664 "zcopy": false, 00:20:21.664 "get_zone_info": false, 00:20:21.664 "zone_management": false, 00:20:21.664 "zone_append": false, 00:20:21.664 "compare": true, 00:20:21.664 "compare_and_write": true, 00:20:21.664 "abort": true, 00:20:21.664 "seek_hole": false, 00:20:21.664 "seek_data": false, 00:20:21.664 "copy": 
true, 00:20:21.664 "nvme_iov_md": false 00:20:21.664 }, 00:20:21.664 "memory_domains": [ 00:20:21.664 { 00:20:21.664 "dma_device_id": "system", 00:20:21.664 "dma_device_type": 1 00:20:21.664 } 00:20:21.664 ], 00:20:21.664 "driver_specific": { 00:20:21.664 "nvme": [ 00:20:21.664 { 00:20:21.664 "trid": { 00:20:21.664 "trtype": "TCP", 00:20:21.664 "adrfam": "IPv4", 00:20:21.664 "traddr": "10.0.0.2", 00:20:21.664 "trsvcid": "4420", 00:20:21.664 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:21.664 }, 00:20:21.664 "ctrlr_data": { 00:20:21.664 "cntlid": 1, 00:20:21.664 "vendor_id": "0x8086", 00:20:21.664 "model_number": "SPDK bdev Controller", 00:20:21.665 "serial_number": "00000000000000000000", 00:20:21.665 "firmware_revision": "24.09", 00:20:21.665 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:21.665 "oacs": { 00:20:21.665 "security": 0, 00:20:21.665 "format": 0, 00:20:21.665 "firmware": 0, 00:20:21.665 "ns_manage": 0 00:20:21.665 }, 00:20:21.665 "multi_ctrlr": true, 00:20:21.665 "ana_reporting": false 00:20:21.665 }, 00:20:21.665 "vs": { 00:20:21.665 "nvme_version": "1.3" 00:20:21.665 }, 00:20:21.665 "ns_data": { 00:20:21.665 "id": 1, 00:20:21.665 "can_share": true 00:20:21.665 } 00:20:21.665 } 00:20:21.665 ], 00:20:21.665 "mp_policy": "active_passive" 00:20:21.665 } 00:20:21.665 } 00:20:21.665 ] 00:20:21.665 22:44:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:21.665 22:44:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:21.665 22:44:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:21.665 22:44:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.665 [2024-07-15 22:44:04.956532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:21.665 [2024-07-15 22:44:04.956620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91c090 
(9): Bad file descriptor 00:20:21.665 [2024-07-15 22:44:05.099038] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:21.665 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:21.665 22:44:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:21.665 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:21.665 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.665 [ 00:20:21.665 { 00:20:21.665 "name": "nvme0n1", 00:20:21.665 "aliases": [ 00:20:21.665 "8f82f1bf-f1ec-413a-9c74-67ad53118577" 00:20:21.665 ], 00:20:21.665 "product_name": "NVMe disk", 00:20:21.665 "block_size": 512, 00:20:21.665 "num_blocks": 2097152, 00:20:21.665 "uuid": "8f82f1bf-f1ec-413a-9c74-67ad53118577", 00:20:21.665 "assigned_rate_limits": { 00:20:21.665 "rw_ios_per_sec": 0, 00:20:21.665 "rw_mbytes_per_sec": 0, 00:20:21.665 "r_mbytes_per_sec": 0, 00:20:21.665 "w_mbytes_per_sec": 0 00:20:21.665 }, 00:20:21.665 "claimed": false, 00:20:21.665 "zoned": false, 00:20:21.665 "supported_io_types": { 00:20:21.665 "read": true, 00:20:21.665 "write": true, 00:20:21.665 "unmap": false, 00:20:21.665 "flush": true, 00:20:21.665 "reset": true, 00:20:21.665 "nvme_admin": true, 00:20:21.665 "nvme_io": true, 00:20:21.665 "nvme_io_md": false, 00:20:21.665 "write_zeroes": true, 00:20:21.665 "zcopy": false, 00:20:21.665 "get_zone_info": false, 00:20:21.665 "zone_management": false, 00:20:21.665 "zone_append": false, 00:20:21.665 "compare": true, 00:20:21.665 "compare_and_write": true, 00:20:21.665 "abort": true, 00:20:21.665 "seek_hole": false, 00:20:21.665 "seek_data": false, 00:20:21.665 "copy": true, 00:20:21.665 "nvme_iov_md": false 00:20:21.665 }, 00:20:21.665 "memory_domains": [ 00:20:21.665 { 00:20:21.665 "dma_device_id": "system", 00:20:21.665 "dma_device_type": 1 00:20:21.665 } 00:20:21.665 ], 00:20:21.665 
"driver_specific": { 00:20:21.665 "nvme": [ 00:20:21.665 { 00:20:21.665 "trid": { 00:20:21.665 "trtype": "TCP", 00:20:21.665 "adrfam": "IPv4", 00:20:21.665 "traddr": "10.0.0.2", 00:20:21.665 "trsvcid": "4420", 00:20:21.665 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:21.665 }, 00:20:21.665 "ctrlr_data": { 00:20:21.665 "cntlid": 2, 00:20:21.665 "vendor_id": "0x8086", 00:20:21.665 "model_number": "SPDK bdev Controller", 00:20:21.665 "serial_number": "00000000000000000000", 00:20:21.665 "firmware_revision": "24.09", 00:20:21.665 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:21.665 "oacs": { 00:20:21.665 "security": 0, 00:20:21.665 "format": 0, 00:20:21.665 "firmware": 0, 00:20:21.665 "ns_manage": 0 00:20:21.665 }, 00:20:21.665 "multi_ctrlr": true, 00:20:21.665 "ana_reporting": false 00:20:21.665 }, 00:20:21.665 "vs": { 00:20:21.665 "nvme_version": "1.3" 00:20:21.665 }, 00:20:21.665 "ns_data": { 00:20:21.665 "id": 1, 00:20:21.665 "can_share": true 00:20:21.665 } 00:20:21.665 } 00:20:21.665 ], 00:20:21.665 "mp_policy": "active_passive" 00:20:21.665 } 00:20:21.665 } 00:20:21.665 ] 00:20:21.665 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:21.665 22:44:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:21.665 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:21.665 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.665 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:21.665 22:44:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:20:21.665 22:44:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.WPnFaWSmAR 00:20:21.665 22:44:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:21.665 22:44:05 nvmf_tcp.nvmf_async_init -- 
host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.WPnFaWSmAR 00:20:21.665 22:44:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:21.665 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:21.665 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.665 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:21.665 22:44:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:21.665 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:21.665 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.665 [2024-07-15 22:44:05.149303] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:21.665 [2024-07-15 22:44:05.149426] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:21.665 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:21.665 22:44:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WPnFaWSmAR 00:20:21.665 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:21.665 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.665 [2024-07-15 22:44:05.157324] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:21.665 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:21.665 22:44:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 
10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WPnFaWSmAR 00:20:21.665 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:21.665 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.924 [2024-07-15 22:44:05.165355] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:21.924 [2024-07-15 22:44:05.165415] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:21.924 nvme0n1 00:20:21.924 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:21.924 22:44:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:21.924 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:21.924 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.924 [ 00:20:21.924 { 00:20:21.924 "name": "nvme0n1", 00:20:21.924 "aliases": [ 00:20:21.924 "8f82f1bf-f1ec-413a-9c74-67ad53118577" 00:20:21.924 ], 00:20:21.924 "product_name": "NVMe disk", 00:20:21.924 "block_size": 512, 00:20:21.924 "num_blocks": 2097152, 00:20:21.924 "uuid": "8f82f1bf-f1ec-413a-9c74-67ad53118577", 00:20:21.924 "assigned_rate_limits": { 00:20:21.924 "rw_ios_per_sec": 0, 00:20:21.924 "rw_mbytes_per_sec": 0, 00:20:21.924 "r_mbytes_per_sec": 0, 00:20:21.924 "w_mbytes_per_sec": 0 00:20:21.924 }, 00:20:21.924 "claimed": false, 00:20:21.924 "zoned": false, 00:20:21.924 "supported_io_types": { 00:20:21.924 "read": true, 00:20:21.924 "write": true, 00:20:21.924 "unmap": false, 00:20:21.924 "flush": true, 00:20:21.924 "reset": true, 00:20:21.924 "nvme_admin": true, 00:20:21.924 "nvme_io": true, 00:20:21.924 "nvme_io_md": false, 00:20:21.924 "write_zeroes": true, 00:20:21.924 "zcopy": false, 00:20:21.924 
"get_zone_info": false, 00:20:21.924 "zone_management": false, 00:20:21.924 "zone_append": false, 00:20:21.924 "compare": true, 00:20:21.924 "compare_and_write": true, 00:20:21.924 "abort": true, 00:20:21.924 "seek_hole": false, 00:20:21.924 "seek_data": false, 00:20:21.924 "copy": true, 00:20:21.924 "nvme_iov_md": false 00:20:21.924 }, 00:20:21.924 "memory_domains": [ 00:20:21.924 { 00:20:21.924 "dma_device_id": "system", 00:20:21.924 "dma_device_type": 1 00:20:21.924 } 00:20:21.924 ], 00:20:21.924 "driver_specific": { 00:20:21.924 "nvme": [ 00:20:21.924 { 00:20:21.924 "trid": { 00:20:21.924 "trtype": "TCP", 00:20:21.924 "adrfam": "IPv4", 00:20:21.924 "traddr": "10.0.0.2", 00:20:21.924 "trsvcid": "4421", 00:20:21.924 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:21.924 }, 00:20:21.924 "ctrlr_data": { 00:20:21.924 "cntlid": 3, 00:20:21.924 "vendor_id": "0x8086", 00:20:21.924 "model_number": "SPDK bdev Controller", 00:20:21.924 "serial_number": "00000000000000000000", 00:20:21.924 "firmware_revision": "24.09", 00:20:21.924 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:21.924 "oacs": { 00:20:21.924 "security": 0, 00:20:21.924 "format": 0, 00:20:21.924 "firmware": 0, 00:20:21.924 "ns_manage": 0 00:20:21.924 }, 00:20:21.924 "multi_ctrlr": true, 00:20:21.924 "ana_reporting": false 00:20:21.924 }, 00:20:21.924 "vs": { 00:20:21.924 "nvme_version": "1.3" 00:20:21.924 }, 00:20:21.924 "ns_data": { 00:20:21.924 "id": 1, 00:20:21.924 "can_share": true 00:20:21.924 } 00:20:21.924 } 00:20:21.924 ], 00:20:21.924 "mp_policy": "active_passive" 00:20:21.924 } 00:20:21.924 } 00:20:21.924 ] 00:20:21.924 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:21.924 22:44:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:21.924 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:21.924 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # 
set +x 00:20:21.924 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:21.924 22:44:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.WPnFaWSmAR 00:20:21.924 22:44:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:21.924 22:44:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:20:21.924 22:44:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:21.924 22:44:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:20:21.924 22:44:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:21.924 22:44:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:20:21.924 22:44:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:21.924 22:44:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:21.924 rmmod nvme_tcp 00:20:21.924 rmmod nvme_fabrics 00:20:21.924 rmmod nvme_keyring 00:20:21.925 22:44:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:21.925 22:44:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:20:21.925 22:44:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:20:21.925 22:44:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1309922 ']' 00:20:21.925 22:44:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1309922 00:20:21.925 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@942 -- # '[' -z 1309922 ']' 00:20:21.925 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # kill -0 1309922 00:20:21.925 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@947 -- # uname 00:20:21.925 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:20:21.925 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1309922 
00:20:21.925 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:20:21.925 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:20:21.925 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1309922' 00:20:21.925 killing process with pid 1309922 00:20:21.925 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@961 -- # kill 1309922 00:20:21.925 [2024-07-15 22:44:05.370933] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:21.925 [2024-07-15 22:44:05.370972] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:21.925 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # wait 1309922 00:20:22.214 22:44:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:22.214 22:44:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:22.214 22:44:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:22.214 22:44:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:22.214 22:44:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:22.214 22:44:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.214 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:22.214 22:44:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.744 22:44:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:24.744 00:20:24.744 real 0m6.297s 00:20:24.744 user 0m3.046s 00:20:24.744 sys 0m1.902s 00:20:24.744 22:44:07 nvmf_tcp.nvmf_async_init 
-- common/autotest_common.sh@1118 -- # xtrace_disable 00:20:24.744 22:44:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.744 ************************************ 00:20:24.744 END TEST nvmf_async_init 00:20:24.744 ************************************ 00:20:24.744 22:44:07 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:20:24.744 22:44:07 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:24.744 22:44:07 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:20:24.744 22:44:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:20:24.744 22:44:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:24.744 ************************************ 00:20:24.744 START TEST dma 00:20:24.744 ************************************ 00:20:24.744 22:44:07 nvmf_tcp.dma -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:24.744 * Looking for test storage... 
00:20:24.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:24.744 22:44:07 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:24.744 22:44:07 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:20:24.744 22:44:07 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:24.744 22:44:07 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:24.744 22:44:07 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:24.744 22:44:07 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:24.744 22:44:07 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:24.744 22:44:07 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:24.744 22:44:07 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:24.744 22:44:07 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:24.744 22:44:07 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:24.744 22:44:07 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:24.744 22:44:07 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.744 22:44:07 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.744 22:44:07 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:24.744 22:44:07 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:24.744 22:44:07 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:24.744 22:44:07 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:24.744 22:44:07 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:24.744 22:44:07 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:20:24.744 22:44:07 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:24.744 22:44:07 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:24.744 22:44:07 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.745 22:44:07 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.745 22:44:07 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:20:24.745 22:44:07 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:20:24.745 22:44:07 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.745 22:44:07 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:20:24.745 22:44:07 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:24.745 22:44:07 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:24.745 22:44:07 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:24.745 22:44:07 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:24.745 22:44:07 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:24.745 22:44:07 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:24.745 22:44:07 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:24.745 22:44:07 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:24.745 22:44:07 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:24.745 22:44:07 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:20:24.745 00:20:24.745 real 0m0.065s 00:20:24.745 user 0m0.033s 00:20:24.745 sys 0m0.038s 00:20:24.745 22:44:07 nvmf_tcp.dma -- common/autotest_common.sh@1118 -- # xtrace_disable 00:20:24.745 22:44:07 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:20:24.745 ************************************ 00:20:24.745 END TEST dma 00:20:24.745 ************************************ 00:20:24.745 22:44:07 nvmf_tcp -- 
common/autotest_common.sh@1136 -- # return 0 00:20:24.745 22:44:07 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:24.745 22:44:07 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:20:24.745 22:44:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:20:24.745 22:44:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:24.745 ************************************ 00:20:24.745 START TEST nvmf_identify 00:20:24.745 ************************************ 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:24.745 * Looking for test storage... 00:20:24.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 
00:20:24.745 22:44:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:26.642 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:26.642 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:20:26.642 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:26.642 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:26.642 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:26.642 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:26.642 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:26.642 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:20:26.642 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:26.642 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:26.643 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:26.643 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:26.643 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:26.643 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:26.643 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:26.643 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:20:26.643 00:20:26.643 --- 10.0.0.2 ping statistics --- 00:20:26.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.643 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:26.643 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:26.643 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:20:26.643 00:20:26.643 --- 10.0.0.1 ping statistics --- 00:20:26.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.643 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:26.643 22:44:09 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:26.643 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:26.643 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:26.643 22:44:10 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1312056 00:20:26.643 22:44:10 
nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:26.643 22:44:10 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:26.643 22:44:10 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1312056 00:20:26.643 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@823 -- # '[' -z 1312056 ']' 00:20:26.643 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.643 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@828 -- # local max_retries=100 00:20:26.643 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.643 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # xtrace_disable 00:20:26.643 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:26.643 [2024-07-15 22:44:10.050044] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:20:26.643 [2024-07-15 22:44:10.050130] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.643 [2024-07-15 22:44:10.117071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:26.901 [2024-07-15 22:44:10.228127] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.901 [2024-07-15 22:44:10.228190] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:26.901 [2024-07-15 22:44:10.228217] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.901 [2024-07-15 22:44:10.228228] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.901 [2024-07-15 22:44:10.228237] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:26.901 [2024-07-15 22:44:10.228326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.901 [2024-07-15 22:44:10.228374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.901 [2024-07-15 22:44:10.228457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:26.901 [2024-07-15 22:44:10.228459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.901 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:20:26.901 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # return 0 00:20:26.901 22:44:10 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:26.901 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:26.901 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:26.901 [2024-07-15 22:44:10.357656] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:26.902 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:26.902 22:44:10 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:26.902 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:26.902 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:26.902 22:44:10 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:26.902 22:44:10 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:20:26.902 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:27.162 Malloc0 00:20:27.162 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:27.162 22:44:10 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:27.162 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:27.162 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:27.162 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:27.162 22:44:10 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:27.162 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:27.162 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:27.162 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:27.162 22:44:10 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:27.162 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:27.162 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:27.162 [2024-07-15 22:44:10.431215] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:27.162 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:27.162 22:44:10 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:27.162 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@553 -- # 
xtrace_disable 00:20:27.162 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:27.162 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:27.162 22:44:10 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:27.162 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:27.162 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:27.162 [ 00:20:27.162 { 00:20:27.162 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:27.162 "subtype": "Discovery", 00:20:27.162 "listen_addresses": [ 00:20:27.162 { 00:20:27.162 "trtype": "TCP", 00:20:27.162 "adrfam": "IPv4", 00:20:27.162 "traddr": "10.0.0.2", 00:20:27.162 "trsvcid": "4420" 00:20:27.162 } 00:20:27.162 ], 00:20:27.162 "allow_any_host": true, 00:20:27.162 "hosts": [] 00:20:27.162 }, 00:20:27.162 { 00:20:27.162 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.162 "subtype": "NVMe", 00:20:27.162 "listen_addresses": [ 00:20:27.162 { 00:20:27.162 "trtype": "TCP", 00:20:27.162 "adrfam": "IPv4", 00:20:27.162 "traddr": "10.0.0.2", 00:20:27.162 "trsvcid": "4420" 00:20:27.162 } 00:20:27.163 ], 00:20:27.163 "allow_any_host": true, 00:20:27.163 "hosts": [], 00:20:27.163 "serial_number": "SPDK00000000000001", 00:20:27.163 "model_number": "SPDK bdev Controller", 00:20:27.163 "max_namespaces": 32, 00:20:27.163 "min_cntlid": 1, 00:20:27.163 "max_cntlid": 65519, 00:20:27.163 "namespaces": [ 00:20:27.163 { 00:20:27.163 "nsid": 1, 00:20:27.163 "bdev_name": "Malloc0", 00:20:27.163 "name": "Malloc0", 00:20:27.163 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:27.163 "eui64": "ABCDEF0123456789", 00:20:27.163 "uuid": "857998e5-a485-4bcf-a9ac-8653af12c85e" 00:20:27.163 } 00:20:27.163 ] 00:20:27.163 } 00:20:27.163 ] 00:20:27.163 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:27.163 22:44:10 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:27.163 [2024-07-15 22:44:10.473679] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:20:27.163 [2024-07-15 22:44:10.473730] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1312198 ] 00:20:27.163 [2024-07-15 22:44:10.509310] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:27.163 [2024-07-15 22:44:10.509378] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:27.163 [2024-07-15 22:44:10.509389] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:27.163 [2024-07-15 22:44:10.509406] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:27.163 [2024-07-15 22:44:10.509417] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:27.163 [2024-07-15 22:44:10.509770] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:27.163 [2024-07-15 22:44:10.509825] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x8e6540 0 00:20:27.163 [2024-07-15 22:44:10.515906] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:27.163 [2024-07-15 22:44:10.515930] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:27.163 [2024-07-15 22:44:10.515939] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:27.163 [2024-07-15 22:44:10.515945] 
nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:27.163 [2024-07-15 22:44:10.516005] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.163 [2024-07-15 22:44:10.516020] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.163 [2024-07-15 22:44:10.516029] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e6540) 00:20:27.163 [2024-07-15 22:44:10.516047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:27.163 [2024-07-15 22:44:10.516074] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9463c0, cid 0, qid 0 00:20:27.163 [2024-07-15 22:44:10.523908] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.163 [2024-07-15 22:44:10.523926] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.163 [2024-07-15 22:44:10.523933] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.163 [2024-07-15 22:44:10.523941] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9463c0) on tqpair=0x8e6540 00:20:27.163 [2024-07-15 22:44:10.523958] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:27.163 [2024-07-15 22:44:10.523970] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:27.163 [2024-07-15 22:44:10.523981] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:27.163 [2024-07-15 22:44:10.524005] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.163 [2024-07-15 22:44:10.524014] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.163 [2024-07-15 22:44:10.524020] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e6540) 
00:20:27.163 [2024-07-15 22:44:10.524031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.163 [2024-07-15 22:44:10.524055] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9463c0, cid 0, qid 0 00:20:27.163 [2024-07-15 22:44:10.524261] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.163 [2024-07-15 22:44:10.524277] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.163 [2024-07-15 22:44:10.524285] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.163 [2024-07-15 22:44:10.524291] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9463c0) on tqpair=0x8e6540 00:20:27.163 [2024-07-15 22:44:10.524301] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:27.163 [2024-07-15 22:44:10.524321] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:27.163 [2024-07-15 22:44:10.524335] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.163 [2024-07-15 22:44:10.524342] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.163 [2024-07-15 22:44:10.524349] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e6540) 00:20:27.163 [2024-07-15 22:44:10.524360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.163 [2024-07-15 22:44:10.524396] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9463c0, cid 0, qid 0 00:20:27.163 [2024-07-15 22:44:10.524640] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.163 [2024-07-15 22:44:10.524656] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.163 
[2024-07-15 22:44:10.524663] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.163 [2024-07-15 22:44:10.524670] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9463c0) on tqpair=0x8e6540 00:20:27.163 [2024-07-15 22:44:10.524679] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:27.163 [2024-07-15 22:44:10.524694] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:27.163 [2024-07-15 22:44:10.524706] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.163 [2024-07-15 22:44:10.524714] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.163 [2024-07-15 22:44:10.524721] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e6540) 00:20:27.163 [2024-07-15 22:44:10.524731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.163 [2024-07-15 22:44:10.524768] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9463c0, cid 0, qid 0 00:20:27.163 [2024-07-15 22:44:10.525016] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.163 [2024-07-15 22:44:10.525032] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.163 [2024-07-15 22:44:10.525040] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.163 [2024-07-15 22:44:10.525046] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9463c0) on tqpair=0x8e6540 00:20:27.163 [2024-07-15 22:44:10.525056] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:27.163 [2024-07-15 22:44:10.525073] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:20:27.163 [2024-07-15 22:44:10.525083] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.163 [2024-07-15 22:44:10.525089] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e6540) 00:20:27.163 [2024-07-15 22:44:10.525100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.163 [2024-07-15 22:44:10.525121] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9463c0, cid 0, qid 0 00:20:27.163 [2024-07-15 22:44:10.525290] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.163 [2024-07-15 22:44:10.525305] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.163 [2024-07-15 22:44:10.525312] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.163 [2024-07-15 22:44:10.525319] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9463c0) on tqpair=0x8e6540 00:20:27.163 [2024-07-15 22:44:10.525328] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:27.163 [2024-07-15 22:44:10.525337] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:27.163 [2024-07-15 22:44:10.525354] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:27.163 [2024-07-15 22:44:10.525467] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:27.163 [2024-07-15 22:44:10.525476] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:27.163 [2024-07-15 22:44:10.525507] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.163 [2024-07-15 22:44:10.525514] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.163 [2024-07-15 22:44:10.525521] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e6540) 00:20:27.163 [2024-07-15 22:44:10.525531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.163 [2024-07-15 22:44:10.525551] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9463c0, cid 0, qid 0 00:20:27.163 [2024-07-15 22:44:10.525748] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.163 [2024-07-15 22:44:10.525764] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.163 [2024-07-15 22:44:10.525771] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.163 [2024-07-15 22:44:10.525778] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9463c0) on tqpair=0x8e6540 00:20:27.163 [2024-07-15 22:44:10.525787] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:27.163 [2024-07-15 22:44:10.525803] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.163 [2024-07-15 22:44:10.525813] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.163 [2024-07-15 22:44:10.525819] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e6540) 00:20:27.163 [2024-07-15 22:44:10.525829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.163 [2024-07-15 22:44:10.525850] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9463c0, cid 0, qid 0 00:20:27.163 [2024-07-15 22:44:10.526028] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:20:27.163 [2024-07-15 22:44:10.526042] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.163 [2024-07-15 22:44:10.526049] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.163 [2024-07-15 22:44:10.526056] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9463c0) on tqpair=0x8e6540 00:20:27.163 [2024-07-15 22:44:10.526064] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:27.163 [2024-07-15 22:44:10.526073] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:27.163 [2024-07-15 22:44:10.526087] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:27.164 [2024-07-15 22:44:10.526102] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:27.164 [2024-07-15 22:44:10.526120] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.164 [2024-07-15 22:44:10.526128] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e6540) 00:20:27.164 [2024-07-15 22:44:10.526139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.164 [2024-07-15 22:44:10.526161] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9463c0, cid 0, qid 0 00:20:27.164 [2024-07-15 22:44:10.526413] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:27.164 [2024-07-15 22:44:10.526429] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:27.164 [2024-07-15 22:44:10.526440] 
nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:27.164 [2024-07-15 22:44:10.526448] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8e6540): datao=0, datal=4096, cccid=0 00:20:27.164 [2024-07-15 22:44:10.526456] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9463c0) on tqpair(0x8e6540): expected_datao=0, payload_size=4096 00:20:27.164 [2024-07-15 22:44:10.526464] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.164 [2024-07-15 22:44:10.526498] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:27.164 [2024-07-15 22:44:10.526510] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:27.164 [2024-07-15 22:44:10.570892] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.164 [2024-07-15 22:44:10.570911] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.164 [2024-07-15 22:44:10.570919] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.164 [2024-07-15 22:44:10.570925] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9463c0) on tqpair=0x8e6540 00:20:27.164 [2024-07-15 22:44:10.570939] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:27.164 [2024-07-15 22:44:10.570953] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:27.164 [2024-07-15 22:44:10.570962] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:27.164 [2024-07-15 22:44:10.570971] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:27.164 [2024-07-15 22:44:10.570979] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:27.164 [2024-07-15 
22:44:10.570987] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:27.164 [2024-07-15 22:44:10.571003] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:27.164 [2024-07-15 22:44:10.571016] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.164 [2024-07-15 22:44:10.571024] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.164 [2024-07-15 22:44:10.571031] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e6540) 00:20:27.164 [2024-07-15 22:44:10.571042] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:27.164 [2024-07-15 22:44:10.571065] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9463c0, cid 0, qid 0 00:20:27.164 [2024-07-15 22:44:10.571286] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.164 [2024-07-15 22:44:10.571302] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.164 [2024-07-15 22:44:10.571309] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.164 [2024-07-15 22:44:10.571316] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9463c0) on tqpair=0x8e6540 00:20:27.164 [2024-07-15 22:44:10.571330] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.164 [2024-07-15 22:44:10.571338] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.164 [2024-07-15 22:44:10.571344] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e6540) 00:20:27.164 [2024-07-15 22:44:10.571354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:20:27.164 [2024-07-15 22:44:10.571365] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.164 [2024-07-15 22:44:10.571372] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.164 [2024-07-15 22:44:10.571394] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x8e6540) 00:20:27.164 [2024-07-15 22:44:10.571403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.164 [2024-07-15 22:44:10.571418] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.164 [2024-07-15 22:44:10.571426] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.164 [2024-07-15 22:44:10.571432] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x8e6540) 00:20:27.164 [2024-07-15 22:44:10.571441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.164 [2024-07-15 22:44:10.571450] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.164 [2024-07-15 22:44:10.571457] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.164 [2024-07-15 22:44:10.571463] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e6540) 00:20:27.164 [2024-07-15 22:44:10.571472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.164 [2024-07-15 22:44:10.571481] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:27.164 [2024-07-15 22:44:10.571501] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:27.164 
[2024-07-15 22:44:10.571514] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.164 [2024-07-15 22:44:10.571535] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8e6540) 00:20:27.164 [2024-07-15 22:44:10.571546] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.164 [2024-07-15 22:44:10.571569] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9463c0, cid 0, qid 0 00:20:27.164 [2024-07-15 22:44:10.571579] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946540, cid 1, qid 0 00:20:27.164 [2024-07-15 22:44:10.571602] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9466c0, cid 2, qid 0 00:20:27.164 [2024-07-15 22:44:10.571610] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946840, cid 3, qid 0 00:20:27.164 [2024-07-15 22:44:10.571617] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9469c0, cid 4, qid 0 00:20:27.164 [2024-07-15 22:44:10.571854] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.164 [2024-07-15 22:44:10.571867] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.164 [2024-07-15 22:44:10.571874] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.164 [2024-07-15 22:44:10.571889] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9469c0) on tqpair=0x8e6540 00:20:27.164 [2024-07-15 22:44:10.571900] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:27.164 [2024-07-15 22:44:10.571910] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:27.164 [2024-07-15 22:44:10.571928] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:20:27.164 [2024-07-15 22:44:10.571938] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8e6540) 00:20:27.164 [2024-07-15 22:44:10.571949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.164 [2024-07-15 22:44:10.571972] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9469c0, cid 4, qid 0 00:20:27.164 [2024-07-15 22:44:10.572167] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:27.164 [2024-07-15 22:44:10.572183] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:27.164 [2024-07-15 22:44:10.572190] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:27.164 [2024-07-15 22:44:10.572196] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8e6540): datao=0, datal=4096, cccid=4 00:20:27.164 [2024-07-15 22:44:10.572204] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9469c0) on tqpair(0x8e6540): expected_datao=0, payload_size=4096 00:20:27.164 [2024-07-15 22:44:10.572216] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.164 [2024-07-15 22:44:10.572227] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:27.164 [2024-07-15 22:44:10.572235] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:27.164 [2024-07-15 22:44:10.572291] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.164 [2024-07-15 22:44:10.572303] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.164 [2024-07-15 22:44:10.572310] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.164 [2024-07-15 22:44:10.572316] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9469c0) on tqpair=0x8e6540 00:20:27.164 [2024-07-15 22:44:10.572336] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:27.164 [2024-07-15 22:44:10.572379] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.164 [2024-07-15 22:44:10.572391] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8e6540) 00:20:27.164 [2024-07-15 22:44:10.572402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.164 [2024-07-15 22:44:10.572414] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.164 [2024-07-15 22:44:10.572422] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.164 [2024-07-15 22:44:10.572428] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8e6540) 00:20:27.164 [2024-07-15 22:44:10.572453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.164 [2024-07-15 22:44:10.572480] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9469c0, cid 4, qid 0 00:20:27.164 [2024-07-15 22:44:10.572492] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946b40, cid 5, qid 0 00:20:27.164 [2024-07-15 22:44:10.572738] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:27.164 [2024-07-15 22:44:10.572751] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:27.164 [2024-07-15 22:44:10.572758] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:27.164 [2024-07-15 22:44:10.572765] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8e6540): datao=0, datal=1024, cccid=4 00:20:27.164 [2024-07-15 22:44:10.572772] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9469c0) on tqpair(0x8e6540): expected_datao=0, payload_size=1024 00:20:27.164 [2024-07-15 22:44:10.572780] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.164 [2024-07-15 22:44:10.572805] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:27.164 [2024-07-15 22:44:10.572812] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:27.164 [2024-07-15 22:44:10.572821] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.164 [2024-07-15 22:44:10.572829] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.164 [2024-07-15 22:44:10.572836] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.164 [2024-07-15 22:44:10.572842] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x946b40) on tqpair=0x8e6540 00:20:27.164 [2024-07-15 22:44:10.613037] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.164 [2024-07-15 22:44:10.613058] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.164 [2024-07-15 22:44:10.613066] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.164 [2024-07-15 22:44:10.613073] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9469c0) on tqpair=0x8e6540 00:20:27.165 [2024-07-15 22:44:10.613094] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.165 [2024-07-15 22:44:10.613104] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8e6540) 00:20:27.165 [2024-07-15 22:44:10.613116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.165 [2024-07-15 22:44:10.613153] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9469c0, cid 4, qid 0 00:20:27.165 [2024-07-15 22:44:10.616889] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:27.165 [2024-07-15 22:44:10.616907] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:27.165 
[2024-07-15 22:44:10.616914] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:27.165 [2024-07-15 22:44:10.616921] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8e6540): datao=0, datal=3072, cccid=4 00:20:27.165 [2024-07-15 22:44:10.616929] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9469c0) on tqpair(0x8e6540): expected_datao=0, payload_size=3072 00:20:27.165 [2024-07-15 22:44:10.616936] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.165 [2024-07-15 22:44:10.616947] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:27.165 [2024-07-15 22:44:10.616954] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:27.165 [2024-07-15 22:44:10.616963] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.165 [2024-07-15 22:44:10.616972] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.165 [2024-07-15 22:44:10.616978] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.165 [2024-07-15 22:44:10.616985] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9469c0) on tqpair=0x8e6540 00:20:27.165 [2024-07-15 22:44:10.617002] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.165 [2024-07-15 22:44:10.617012] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8e6540) 00:20:27.165 [2024-07-15 22:44:10.617023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.165 [2024-07-15 22:44:10.617054] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9469c0, cid 4, qid 0 00:20:27.165 [2024-07-15 22:44:10.617246] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:27.165 [2024-07-15 22:44:10.617262] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:20:27.165 [2024-07-15 22:44:10.617269] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:27.165 [2024-07-15 22:44:10.617275] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8e6540): datao=0, datal=8, cccid=4 00:20:27.165 [2024-07-15 22:44:10.617283] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9469c0) on tqpair(0x8e6540): expected_datao=0, payload_size=8 00:20:27.165 [2024-07-15 22:44:10.617291] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.165 [2024-07-15 22:44:10.617300] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:27.165 [2024-07-15 22:44:10.617308] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:27.165 [2024-07-15 22:44:10.658052] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.165 [2024-07-15 22:44:10.658071] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.165 [2024-07-15 22:44:10.658079] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.165 [2024-07-15 22:44:10.658086] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9469c0) on tqpair=0x8e6540 00:20:27.165 ===================================================== 00:20:27.165 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:27.165 ===================================================== 00:20:27.165 Controller Capabilities/Features 00:20:27.165 ================================ 00:20:27.165 Vendor ID: 0000 00:20:27.165 Subsystem Vendor ID: 0000 00:20:27.165 Serial Number: .................... 00:20:27.165 Model Number: ........................................ 
00:20:27.165 Firmware Version: 24.09 00:20:27.165 Recommended Arb Burst: 0 00:20:27.165 IEEE OUI Identifier: 00 00 00 00:20:27.165 Multi-path I/O 00:20:27.165 May have multiple subsystem ports: No 00:20:27.165 May have multiple controllers: No 00:20:27.165 Associated with SR-IOV VF: No 00:20:27.165 Max Data Transfer Size: 131072 00:20:27.165 Max Number of Namespaces: 0 00:20:27.165 Max Number of I/O Queues: 1024 00:20:27.165 NVMe Specification Version (VS): 1.3 00:20:27.165 NVMe Specification Version (Identify): 1.3 00:20:27.165 Maximum Queue Entries: 128 00:20:27.165 Contiguous Queues Required: Yes 00:20:27.165 Arbitration Mechanisms Supported 00:20:27.165 Weighted Round Robin: Not Supported 00:20:27.165 Vendor Specific: Not Supported 00:20:27.165 Reset Timeout: 15000 ms 00:20:27.165 Doorbell Stride: 4 bytes 00:20:27.165 NVM Subsystem Reset: Not Supported 00:20:27.165 Command Sets Supported 00:20:27.165 NVM Command Set: Supported 00:20:27.165 Boot Partition: Not Supported 00:20:27.165 Memory Page Size Minimum: 4096 bytes 00:20:27.165 Memory Page Size Maximum: 4096 bytes 00:20:27.165 Persistent Memory Region: Not Supported 00:20:27.165 Optional Asynchronous Events Supported 00:20:27.165 Namespace Attribute Notices: Not Supported 00:20:27.165 Firmware Activation Notices: Not Supported 00:20:27.165 ANA Change Notices: Not Supported 00:20:27.165 PLE Aggregate Log Change Notices: Not Supported 00:20:27.165 LBA Status Info Alert Notices: Not Supported 00:20:27.165 EGE Aggregate Log Change Notices: Not Supported 00:20:27.165 Normal NVM Subsystem Shutdown event: Not Supported 00:20:27.165 Zone Descriptor Change Notices: Not Supported 00:20:27.165 Discovery Log Change Notices: Supported 00:20:27.165 Controller Attributes 00:20:27.165 128-bit Host Identifier: Not Supported 00:20:27.165 Non-Operational Permissive Mode: Not Supported 00:20:27.165 NVM Sets: Not Supported 00:20:27.165 Read Recovery Levels: Not Supported 00:20:27.165 Endurance Groups: Not Supported 00:20:27.165 
Predictable Latency Mode: Not Supported 00:20:27.165 Traffic Based Keep ALive: Not Supported 00:20:27.165 Namespace Granularity: Not Supported 00:20:27.165 SQ Associations: Not Supported 00:20:27.165 UUID List: Not Supported 00:20:27.165 Multi-Domain Subsystem: Not Supported 00:20:27.165 Fixed Capacity Management: Not Supported 00:20:27.165 Variable Capacity Management: Not Supported 00:20:27.165 Delete Endurance Group: Not Supported 00:20:27.165 Delete NVM Set: Not Supported 00:20:27.165 Extended LBA Formats Supported: Not Supported 00:20:27.165 Flexible Data Placement Supported: Not Supported 00:20:27.165 00:20:27.165 Controller Memory Buffer Support 00:20:27.165 ================================ 00:20:27.165 Supported: No 00:20:27.165 00:20:27.165 Persistent Memory Region Support 00:20:27.165 ================================ 00:20:27.165 Supported: No 00:20:27.165 00:20:27.165 Admin Command Set Attributes 00:20:27.165 ============================ 00:20:27.165 Security Send/Receive: Not Supported 00:20:27.165 Format NVM: Not Supported 00:20:27.165 Firmware Activate/Download: Not Supported 00:20:27.165 Namespace Management: Not Supported 00:20:27.165 Device Self-Test: Not Supported 00:20:27.165 Directives: Not Supported 00:20:27.165 NVMe-MI: Not Supported 00:20:27.165 Virtualization Management: Not Supported 00:20:27.165 Doorbell Buffer Config: Not Supported 00:20:27.165 Get LBA Status Capability: Not Supported 00:20:27.165 Command & Feature Lockdown Capability: Not Supported 00:20:27.165 Abort Command Limit: 1 00:20:27.165 Async Event Request Limit: 4 00:20:27.165 Number of Firmware Slots: N/A 00:20:27.165 Firmware Slot 1 Read-Only: N/A 00:20:27.165 Firmware Activation Without Reset: N/A 00:20:27.165 Multiple Update Detection Support: N/A 00:20:27.165 Firmware Update Granularity: No Information Provided 00:20:27.165 Per-Namespace SMART Log: No 00:20:27.165 Asymmetric Namespace Access Log Page: Not Supported 00:20:27.165 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:20:27.165 Command Effects Log Page: Not Supported 00:20:27.165 Get Log Page Extended Data: Supported 00:20:27.165 Telemetry Log Pages: Not Supported 00:20:27.165 Persistent Event Log Pages: Not Supported 00:20:27.165 Supported Log Pages Log Page: May Support 00:20:27.165 Commands Supported & Effects Log Page: Not Supported 00:20:27.165 Feature Identifiers & Effects Log Page:May Support 00:20:27.165 NVMe-MI Commands & Effects Log Page: May Support 00:20:27.165 Data Area 4 for Telemetry Log: Not Supported 00:20:27.165 Error Log Page Entries Supported: 128 00:20:27.165 Keep Alive: Not Supported 00:20:27.165 00:20:27.165 NVM Command Set Attributes 00:20:27.165 ========================== 00:20:27.165 Submission Queue Entry Size 00:20:27.165 Max: 1 00:20:27.165 Min: 1 00:20:27.165 Completion Queue Entry Size 00:20:27.165 Max: 1 00:20:27.165 Min: 1 00:20:27.165 Number of Namespaces: 0 00:20:27.165 Compare Command: Not Supported 00:20:27.165 Write Uncorrectable Command: Not Supported 00:20:27.165 Dataset Management Command: Not Supported 00:20:27.165 Write Zeroes Command: Not Supported 00:20:27.165 Set Features Save Field: Not Supported 00:20:27.165 Reservations: Not Supported 00:20:27.165 Timestamp: Not Supported 00:20:27.165 Copy: Not Supported 00:20:27.165 Volatile Write Cache: Not Present 00:20:27.165 Atomic Write Unit (Normal): 1 00:20:27.165 Atomic Write Unit (PFail): 1 00:20:27.165 Atomic Compare & Write Unit: 1 00:20:27.165 Fused Compare & Write: Supported 00:20:27.165 Scatter-Gather List 00:20:27.165 SGL Command Set: Supported 00:20:27.165 SGL Keyed: Supported 00:20:27.165 SGL Bit Bucket Descriptor: Not Supported 00:20:27.165 SGL Metadata Pointer: Not Supported 00:20:27.165 Oversized SGL: Not Supported 00:20:27.165 SGL Metadata Address: Not Supported 00:20:27.165 SGL Offset: Supported 00:20:27.165 Transport SGL Data Block: Not Supported 00:20:27.165 Replay Protected Memory Block: Not Supported 00:20:27.165 00:20:27.165 
Firmware Slot Information
00:20:27.165 =========================
00:20:27.165 Active slot: 0
00:20:27.165
00:20:27.165
00:20:27.165 Error Log
00:20:27.165 =========
00:20:27.165
00:20:27.165 Active Namespaces
00:20:27.165 =================
00:20:27.165 Discovery Log Page
00:20:27.165 ==================
00:20:27.165 Generation Counter: 2
00:20:27.165 Number of Records: 2
00:20:27.165 Record Format: 0
00:20:27.166
00:20:27.166 Discovery Log Entry 0
00:20:27.166 ----------------------
00:20:27.166 Transport Type: 3 (TCP)
00:20:27.166 Address Family: 1 (IPv4)
00:20:27.166 Subsystem Type: 3 (Current Discovery Subsystem)
00:20:27.166 Entry Flags:
00:20:27.166 Duplicate Returned Information: 1
00:20:27.166 Explicit Persistent Connection Support for Discovery: 1
00:20:27.166 Transport Requirements:
00:20:27.166 Secure Channel: Not Required
00:20:27.166 Port ID: 0 (0x0000)
00:20:27.166 Controller ID: 65535 (0xffff)
00:20:27.166 Admin Max SQ Size: 128
00:20:27.166 Transport Service Identifier: 4420
00:20:27.166 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:20:27.166 Transport Address: 10.0.0.2
00:20:27.166 Discovery Log Entry 1
00:20:27.166 ----------------------
00:20:27.166 Transport Type: 3 (TCP)
00:20:27.166 Address Family: 1 (IPv4)
00:20:27.166 Subsystem Type: 2 (NVM Subsystem)
00:20:27.166 Entry Flags:
00:20:27.166 Duplicate Returned Information: 0
00:20:27.166 Explicit Persistent Connection Support for Discovery: 0
00:20:27.166 Transport Requirements:
00:20:27.166 Secure Channel: Not Required
00:20:27.166 Port ID: 0 (0x0000)
00:20:27.166 Controller ID: 65535 (0xffff)
00:20:27.166 Admin Max SQ Size: 128
00:20:27.166 Transport Service Identifier: 4420
00:20:27.166 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:20:27.166 Transport Address: 10.0.0.2
[2024-07-15 22:44:10.658206] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:20:27.166 [2024-07-15 22:44:10.658229]
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9463c0) on tqpair=0x8e6540
00:20:27.166 [2024-07-15 22:44:10.658242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:27.166 [2024-07-15 22:44:10.658252] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x946540) on tqpair=0x8e6540
00:20:27.166 [2024-07-15 22:44:10.658260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:27.166 [2024-07-15 22:44:10.658268] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9466c0) on tqpair=0x8e6540
00:20:27.166 [2024-07-15 22:44:10.658276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:27.166 [2024-07-15 22:44:10.658287] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x946840) on tqpair=0x8e6540
00:20:27.166 [2024-07-15 22:44:10.658295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:27.166 [2024-07-15 22:44:10.658315] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:27.166 [2024-07-15 22:44:10.658324] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:27.166 [2024-07-15 22:44:10.658346] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e6540)
00:20:27.166 [2024-07-15 22:44:10.658357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:27.166 [2024-07-15 22:44:10.658382] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946840, cid 3, qid 0
00:20:27.166 [2024-07-15 22:44:10.658564] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:27.166 [2024-07-15 22:44:10.658581] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:27.166 [2024-07-15 22:44:10.658587] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:27.166 [2024-07-15 22:44:10.658594] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x946840) on tqpair=0x8e6540
00:20:27.166 [2024-07-15 22:44:10.658608] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:27.166 [2024-07-15 22:44:10.658616] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:27.166 [2024-07-15 22:44:10.658622] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e6540)
00:20:27.166 [2024-07-15 22:44:10.658633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:27.166 [2024-07-15 22:44:10.658661] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946840, cid 3, qid 0
00:20:27.166 [2024-07-15 22:44:10.658841] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:27.166 [2024-07-15 22:44:10.658853] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:27.166 [2024-07-15 22:44:10.658860] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:27.166 [2024-07-15 22:44:10.658867] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x946840) on tqpair=0x8e6540
00:20:27.166 [2024-07-15 22:44:10.658883] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us
00:20:27.166 [2024-07-15 22:44:10.658894] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms
00:20:27.166 [2024-07-15 22:44:10.658911] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:27.166 [2024-07-15 22:44:10.658920] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:27.166 [2024-07-15 22:44:10.658927] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e6540)
00:20:27.166 [2024-07-15 22:44:10.658937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:27.166 [2024-07-15 22:44:10.658958] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946840, cid 3, qid 0
00:20:27.166 [2024-07-15 22:44:10.659135] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:27.166 [2024-07-15 22:44:10.659150] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:27.166 [2024-07-15 22:44:10.659157] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:27.166 [2024-07-15 22:44:10.659164] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x946840) on tqpair=0x8e6540
00:20:27.166 [2024-07-15 22:44:10.659182] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:27.166 [2024-07-15 22:44:10.659192] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:27.166 [2024-07-15 22:44:10.659198] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e6540)
00:20:27.166 [2024-07-15 22:44:10.659208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:27.166 [2024-07-15 22:44:10.659234] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946840, cid 3, qid 0
00:20:27.166 [2024-07-15 22:44:10.659379] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:27.166 [2024-07-15 22:44:10.659395] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:27.166 [2024-07-15 22:44:10.659402] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:27.166 [2024-07-15 22:44:10.659408] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x946840) on tqpair=0x8e6540
00:20:27.166 [2024-07-15 22:44:10.659424] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:27.166 [2024-07-15 22:44:10.659434] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:27.166 [2024-07-15 22:44:10.659441] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e6540)
00:20:27.166 [2024-07-15 22:44:10.659451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:27.166 [2024-07-15 22:44:10.659471] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946840, cid 3, qid 0
00:20:27.429 [2024-07-15 22:44:10.659617] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:27.429 [2024-07-15 22:44:10.659632] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:27.429 [2024-07-15 22:44:10.659639] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:27.429 [2024-07-15 22:44:10.659646] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x946840) on tqpair=0x8e6540
00:20:27.429 [2024-07-15 22:44:10.659662] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:27.429 [2024-07-15 22:44:10.659672] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:27.429 [2024-07-15 22:44:10.659678] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e6540)
00:20:27.429 [2024-07-15 22:44:10.659689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:27.429 [2024-07-15 22:44:10.659709] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946840, cid 3, qid 0
00:20:27.429 [2024-07-15 22:44:10.659854] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:27.429 [2024-07-15 22:44:10.659870] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:27.429 [2024-07-15 22:44:10.659883] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:27.429 [2024-07-15 22:44:10.659890] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x946840) on tqpair=0x8e6540
00:20:27.429 [2024-07-15 22:44:10.659907] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:27.429 [2024-07-15 22:44:10.659917] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:27.429 [2024-07-15 22:44:10.659923] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e6540)
00:20:27.429 [2024-07-15 22:44:10.659934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:27.429 [2024-07-15 22:44:10.659954] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946840, cid 3, qid 0
00:20:27.429 [2024-07-15 22:44:10.660105] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:27.429 [2024-07-15 22:44:10.660120] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:27.429 [2024-07-15 22:44:10.660127] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:27.429 [2024-07-15 22:44:10.660134] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x946840) on tqpair=0x8e6540
00:20:27.429 [2024-07-15 22:44:10.660150] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:27.429 [2024-07-15 22:44:10.660160] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:27.430 [2024-07-15 22:44:10.660166] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e6540)
00:20:27.430 [2024-07-15 22:44:10.660176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:27.430 [2024-07-15 22:44:10.660197] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946840, cid 3, qid 0
00:20:27.430 [2024-07-15 22:44:10.660342] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:27.430 [2024-07-15 22:44:10.660357] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:27.430 [2024-07-15 22:44:10.660364] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:27.430 [2024-07-15 22:44:10.660371] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x946840) on tqpair=0x8e6540
00:20:27.430 [2024-07-15 22:44:10.660387] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:27.430 [2024-07-15 22:44:10.660396] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:27.430 [2024-07-15 22:44:10.660403] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e6540)
00:20:27.430 [2024-07-15 22:44:10.660413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:27.430 [2024-07-15 22:44:10.660434] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946840, cid 3, qid 0
00:20:27.430 [2024-07-15 22:44:10.660600] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:27.430 [2024-07-15 22:44:10.660612] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:27.430 [2024-07-15 22:44:10.660619] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:27.430 [2024-07-15 22:44:10.660626] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x946840) on tqpair=0x8e6540
00:20:27.430 [2024-07-15 22:44:10.660642] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:27.430 [2024-07-15 22:44:10.660651] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:27.430 [2024-07-15 22:44:10.660658] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e6540)
00:20:27.430 [2024-07-15 22:44:10.660668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:27.430 [2024-07-15 22:44:10.660688] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946840, cid 3, qid 0
00:20:27.430 [2024-07-15 22:44:10.660845] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:27.430 [2024-07-15 22:44:10.660860] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:27.430 [2024-07-15 22:44:10.660867] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:27.430 [2024-07-15 22:44:10.660873] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x946840) on tqpair=0x8e6540
00:20:27.430 [2024-07-15 22:44:10.664905] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:27.430 [2024-07-15 22:44:10.664916] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:27.430 [2024-07-15 22:44:10.664923] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e6540)
00:20:27.430 [2024-07-15 22:44:10.664933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:27.430 [2024-07-15 22:44:10.664955] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946840, cid 3, qid 0
00:20:27.430 [2024-07-15 22:44:10.665138] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:27.430 [2024-07-15 22:44:10.665154] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:27.430 [2024-07-15 22:44:10.665161] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:27.430 [2024-07-15 22:44:10.665168] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x946840) on tqpair=0x8e6540
00:20:27.430 [2024-07-15 22:44:10.665182] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds
00:20:27.430
00:20:27.430 22:44:10
nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:20:27.430 [2024-07-15 22:44:10.700915] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization...
00:20:27.430 [2024-07-15 22:44:10.700960] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1312206 ]
00:20:27.430 [2024-07-15 22:44:10.735641] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:20:27.430 [2024-07-15 22:44:10.735698] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:20:27.430 [2024-07-15 22:44:10.735708] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:20:27.430 [2024-07-15 22:44:10.735725] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:20:27.430 [2024-07-15 22:44:10.735734] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:20:27.430 [2024-07-15 22:44:10.736053] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:20:27.430 [2024-07-15 22:44:10.736093] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x8ff540 0
00:20:27.430 [2024-07-15 22:44:10.742908] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:20:27.430 [2024-07-15 22:44:10.742926] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:20:27.430 [2024-07-15 22:44:10.742933] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:20:27.430 [2024-07-15 22:44:10.742939] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:20:27.430 [2024-07-15 22:44:10.742991] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:27.430 [2024-07-15 22:44:10.743003] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:27.430 [2024-07-15 22:44:10.743010] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8ff540)
00:20:27.430 [2024-07-15 22:44:10.743024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:20:27.430 [2024-07-15 22:44:10.743050] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x95f3c0, cid 0, qid 0
00:20:27.430 [2024-07-15 22:44:10.750892] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:27.430 [2024-07-15 22:44:10.750909] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:27.430 [2024-07-15 22:44:10.750916] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:27.430 [2024-07-15 22:44:10.750923] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x95f3c0) on tqpair=0x8ff540
00:20:27.430 [2024-07-15 22:44:10.750954] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:20:27.430 [2024-07-15 22:44:10.750966] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout)
00:20:27.430 [2024-07-15 22:44:10.750975] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout)
00:20:27.430 [2024-07-15 22:44:10.750992] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:27.430 [2024-07-15 22:44:10.751001] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:27.430 [2024-07-15 22:44:10.751007] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8ff540)
00:20:27.430 [2024-07-15 22:44:10.751018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:27.430 [2024-07-15 22:44:10.751042] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x95f3c0, cid 0, qid 0
00:20:27.430 [2024-07-15 22:44:10.751230] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:27.430 [2024-07-15 22:44:10.751243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:27.430 [2024-07-15 22:44:10.751250] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:27.430 [2024-07-15 22:44:10.751257] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x95f3c0) on tqpair=0x8ff540
00:20:27.430 [2024-07-15 22:44:10.751265] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout)
00:20:27.430 [2024-07-15 22:44:10.751283] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout)
00:20:27.430 [2024-07-15 22:44:10.751296] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:27.430 [2024-07-15 22:44:10.751304] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:27.430 [2024-07-15 22:44:10.751310] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8ff540)
00:20:27.430 [2024-07-15 22:44:10.751321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:27.430 [2024-07-15 22:44:10.751342] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x95f3c0, cid 0, qid 0
00:20:27.430 [2024-07-15 22:44:10.751520] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:27.430 [2024-07-15 22:44:10.751532] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:27.430 [2024-07-15 22:44:10.751539] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:27.430 [2024-07-15 22:44:10.751546] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x95f3c0) on tqpair=0x8ff540
00:20:27.430 [2024-07-15 22:44:10.751554] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout)
00:20:27.430 [2024-07-15 22:44:10.751568] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms)
00:20:27.430 [2024-07-15 22:44:10.751580] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:27.430 [2024-07-15 22:44:10.751587] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:27.430 [2024-07-15 22:44:10.751593] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8ff540)
00:20:27.430 [2024-07-15 22:44:10.751604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:27.430 [2024-07-15 22:44:10.751624] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x95f3c0, cid 0, qid 0
00:20:27.430 [2024-07-15 22:44:10.751763] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:27.430 [2024-07-15 22:44:10.751775] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:27.430 [2024-07-15 22:44:10.751782] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:27.430 [2024-07-15 22:44:10.751789] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x95f3c0) on tqpair=0x8ff540
00:20:27.430 [2024-07-15 22:44:10.751797] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:20:27.430 [2024-07-15 22:44:10.751813] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:27.430 [2024-07-15 22:44:10.751822] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:27.430 [2024-07-15 22:44:10.751829] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8ff540)
00:20:27.430 [2024-07-15 22:44:10.751839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:27.430 [2024-07-15 22:44:10.751859] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x95f3c0, cid 0, qid 0
00:20:27.430 [2024-07-15 22:44:10.752013] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:27.430 [2024-07-15 22:44:10.752029] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:27.430 [2024-07-15 22:44:10.752036] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:27.430 [2024-07-15 22:44:10.752042] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x95f3c0) on tqpair=0x8ff540
00:20:27.430 [2024-07-15 22:44:10.752050] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0
00:20:27.430 [2024-07-15 22:44:10.752058] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms)
00:20:27.430 [2024-07-15 22:44:10.752071] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:20:27.430 [2024-07-15 22:44:10.752197] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1
00:20:27.431 [2024-07-15 22:44:10.752205] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:20:27.431 [2024-07-15 22:44:10.752218] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:27.431 [2024-07-15 22:44:10.752225] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:27.431 [2024-07-15 22:44:10.752231] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8ff540)
00:20:27.431 [2024-07-15 22:44:10.752241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:27.431 [2024-07-15 22:44:10.752263] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x95f3c0, cid 0, qid 0
00:20:27.431 [2024-07-15 22:44:10.752443] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:27.431 [2024-07-15 22:44:10.752459] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:27.431 [2024-07-15 22:44:10.752465] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:27.431 [2024-07-15 22:44:10.752472] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x95f3c0) on tqpair=0x8ff540
00:20:27.431 [2024-07-15 22:44:10.752481] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:20:27.431 [2024-07-15 22:44:10.752497] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:27.431 [2024-07-15 22:44:10.752506] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:27.431 [2024-07-15 22:44:10.752513] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8ff540)
00:20:27.431 [2024-07-15 22:44:10.752523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:27.431 [2024-07-15 22:44:10.752544] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x95f3c0, cid 0, qid 0
00:20:27.431 [2024-07-15 22:44:10.752679] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:27.431 [2024-07-15 22:44:10.752694] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:27.431 [2024-07-15 22:44:10.752701] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:27.431 [2024-07-15 22:44:10.752707] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x95f3c0) on tqpair=0x8ff540
00:20:27.431 [2024-07-15 22:44:10.752715] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:20:27.431 [2024-07-15 22:44:10.752723] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms)
00:20:27.431 [2024-07-15 22:44:10.752737] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout)
00:20:27.431 [2024-07-15 22:44:10.752750] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms)
00:20:27.431 [2024-07-15 22:44:10.752765] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:27.431 [2024-07-15 22:44:10.752772] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8ff540)
00:20:27.431 [2024-07-15 22:44:10.752783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:27.431 [2024-07-15 22:44:10.752804] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x95f3c0, cid 0, qid 0
00:20:27.431 [2024-07-15 22:44:10.753109] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:27.431 [2024-07-15 22:44:10.753125] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:27.431 [2024-07-15 22:44:10.753133] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:27.431 [2024-07-15 22:44:10.753143] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8ff540): datao=0, datal=4096, cccid=0
00:20:27.431 [2024-07-15 22:44:10.753151] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x95f3c0) on tqpair(0x8ff540): expected_datao=0, payload_size=4096
00:20:27.431 [2024-07-15 22:44:10.753158] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:27.431 [2024-07-15 22:44:10.753169] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:27.431 [2024-07-15 22:44:10.753176] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:27.431 [2024-07-15 22:44:10.753209] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:27.431 [2024-07-15 22:44:10.753221] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:27.431 [2024-07-15 22:44:10.753227] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:27.431 [2024-07-15 22:44:10.753234] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x95f3c0) on tqpair=0x8ff540
00:20:27.431 [2024-07-15 22:44:10.753245] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295
00:20:27.431 [2024-07-15 22:44:10.753259] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072
00:20:27.431 [2024-07-15 22:44:10.753273] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001
00:20:27.431 [2024-07-15 22:44:10.753285] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16
00:20:27.431 [2024-07-15 22:44:10.753297] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1
00:20:27.431 [2024-07-15 22:44:10.753310] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms)
00:20:27.431 [2024-07-15 22:44:10.753332] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms)
00:20:27.431 [2024-07-15 22:44:10.753346] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:27.431 [2024-07-15 22:44:10.753354] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:27.431 [2024-07-15 22:44:10.753360] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8ff540)
00:20:27.431 [2024-07-15 22:44:10.753371] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:20:27.431 [2024-07-15 22:44:10.753407] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x95f3c0, cid 0, qid 0
00:20:27.431 [2024-07-15 22:44:10.753622] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:27.431 [2024-07-15 22:44:10.753638] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:27.431 [2024-07-15 22:44:10.753645] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:27.431 [2024-07-15 22:44:10.753652] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x95f3c0) on tqpair=0x8ff540
00:20:27.431 [2024-07-15 22:44:10.753662] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:27.431 [2024-07-15 22:44:10.753670] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:27.431 [2024-07-15 22:44:10.753676] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8ff540)
00:20:27.431 [2024-07-15 22:44:10.753686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:27.431 [2024-07-15 22:44:10.753696] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:27.431 [2024-07-15 22:44:10.753703] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:27.431 [2024-07-15 22:44:10.753709] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x8ff540)
00:20:27.431 [2024-07-15 22:44:10.753718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:27.431 [2024-07-15 22:44:10.753728] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:27.431 [2024-07-15 22:44:10.753735] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:27.431 [2024-07-15 22:44:10.753744] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x8ff540)
00:20:27.431 [2024-07-15 22:44:10.753754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:27.431 [2024-07-15 22:44:10.753779] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:27.431 [2024-07-15 22:44:10.753786] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:27.431 [2024-07-15 22:44:10.753792] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8ff540)
00:20:27.431 [2024-07-15 22:44:10.753800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:27.431 [2024-07-15 22:44:10.753809] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms)
00:20:27.431 [2024-07-15 22:44:10.753827] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:20:27.431 [2024-07-15 22:44:10.753839] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:27.431 [2024-07-15 22:44:10.753846] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8ff540)
00:20:27.431 [2024-07-15 22:44:10.753856] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:27.431 [2024-07-15 22:44:10.753899] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x95f3c0, cid 0, qid 0
00:20:27.431 [2024-07-15 22:44:10.753912] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x95f540, cid 1, qid 0
00:20:27.431 [2024-07-15 22:44:10.753920] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x95f6c0, cid 2, qid 0
00:20:27.431 [2024-07-15 22:44:10.753928] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x95f840, cid 3, qid 0
00:20:27.431 [2024-07-15 22:44:10.753935] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x95f9c0, cid 4, qid 0
00:20:27.431 [2024-07-15 22:44:10.754119] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:27.431 [2024-07-15 22:44:10.754134] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:27.431 [2024-07-15 22:44:10.754141] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:27.431 [2024-07-15 22:44:10.754148] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x95f9c0) on tqpair=0x8ff540
00:20:27.431 [2024-07-15 22:44:10.754156] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us
00:20:27.432 [2024-07-15 22:44:10.754165] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms)
00:20:27.432 [2024-07-15 22:44:10.754179] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms)
00:20:27.432 [2024-07-15 22:44:10.754192] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms)
00:20:27.432
[2024-07-15 22:44:10.754218] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.432 [2024-07-15 22:44:10.754225] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.432 [2024-07-15 22:44:10.754231] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8ff540) 00:20:27.432 [2024-07-15 22:44:10.754241] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:27.432 [2024-07-15 22:44:10.754262] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x95f9c0, cid 4, qid 0 00:20:27.432 [2024-07-15 22:44:10.754446] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.432 [2024-07-15 22:44:10.754462] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.432 [2024-07-15 22:44:10.754469] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.432 [2024-07-15 22:44:10.754479] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x95f9c0) on tqpair=0x8ff540 00:20:27.432 [2024-07-15 22:44:10.754546] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:27.432 [2024-07-15 22:44:10.754566] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:27.432 [2024-07-15 22:44:10.754581] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.432 [2024-07-15 22:44:10.754589] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8ff540) 00:20:27.432 [2024-07-15 22:44:10.754599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.432 [2024-07-15 22:44:10.754621] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x95f9c0, cid 4, qid 0 00:20:27.432 [2024-07-15 22:44:10.754811] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:27.432 [2024-07-15 22:44:10.754824] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:27.432 [2024-07-15 22:44:10.754831] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:27.432 [2024-07-15 22:44:10.754837] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8ff540): datao=0, datal=4096, cccid=4 00:20:27.432 [2024-07-15 22:44:10.754845] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x95f9c0) on tqpair(0x8ff540): expected_datao=0, payload_size=4096 00:20:27.432 [2024-07-15 22:44:10.754852] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.432 [2024-07-15 22:44:10.754862] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:27.432 [2024-07-15 22:44:10.754870] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:27.432 [2024-07-15 22:44:10.758891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.432 [2024-07-15 22:44:10.758906] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.432 [2024-07-15 22:44:10.758913] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.432 [2024-07-15 22:44:10.758919] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x95f9c0) on tqpair=0x8ff540 00:20:27.432 [2024-07-15 22:44:10.758937] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:27.432 [2024-07-15 22:44:10.758958] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:27.432 [2024-07-15 22:44:10.758992] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 
00:20:27.432 [2024-07-15 22:44:10.759008] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.432 [2024-07-15 22:44:10.759015] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8ff540) 00:20:27.432 [2024-07-15 22:44:10.759026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.432 [2024-07-15 22:44:10.759049] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x95f9c0, cid 4, qid 0 00:20:27.432 [2024-07-15 22:44:10.759255] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:27.432 [2024-07-15 22:44:10.759268] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:27.432 [2024-07-15 22:44:10.759275] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:27.432 [2024-07-15 22:44:10.759281] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8ff540): datao=0, datal=4096, cccid=4 00:20:27.432 [2024-07-15 22:44:10.759289] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x95f9c0) on tqpair(0x8ff540): expected_datao=0, payload_size=4096 00:20:27.432 [2024-07-15 22:44:10.759296] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.432 [2024-07-15 22:44:10.759306] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:27.432 [2024-07-15 22:44:10.759314] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:27.432 [2024-07-15 22:44:10.759359] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.432 [2024-07-15 22:44:10.759372] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.432 [2024-07-15 22:44:10.759379] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.432 [2024-07-15 22:44:10.759386] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x95f9c0) on 
tqpair=0x8ff540 00:20:27.432 [2024-07-15 22:44:10.759410] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:27.432 [2024-07-15 22:44:10.759430] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:27.432 [2024-07-15 22:44:10.759444] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.432 [2024-07-15 22:44:10.759452] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8ff540) 00:20:27.432 [2024-07-15 22:44:10.759463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.432 [2024-07-15 22:44:10.759484] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x95f9c0, cid 4, qid 0 00:20:27.432 [2024-07-15 22:44:10.759634] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:27.432 [2024-07-15 22:44:10.759647] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:27.432 [2024-07-15 22:44:10.759654] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:27.432 [2024-07-15 22:44:10.759660] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8ff540): datao=0, datal=4096, cccid=4 00:20:27.432 [2024-07-15 22:44:10.759667] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x95f9c0) on tqpair(0x8ff540): expected_datao=0, payload_size=4096 00:20:27.432 [2024-07-15 22:44:10.759675] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.432 [2024-07-15 22:44:10.759684] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:27.432 [2024-07-15 22:44:10.759692] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:27.432 [2024-07-15 22:44:10.759734] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.432 [2024-07-15 22:44:10.759745] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.432 [2024-07-15 22:44:10.759752] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.432 [2024-07-15 22:44:10.759759] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x95f9c0) on tqpair=0x8ff540 00:20:27.432 [2024-07-15 22:44:10.759774] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:27.432 [2024-07-15 22:44:10.759789] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:27.432 [2024-07-15 22:44:10.759804] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:27.432 [2024-07-15 22:44:10.759816] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:27.432 [2024-07-15 22:44:10.759825] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:27.432 [2024-07-15 22:44:10.759834] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:27.432 [2024-07-15 22:44:10.759843] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:27.432 [2024-07-15 22:44:10.759851] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:27.432 [2024-07-15 22:44:10.759860] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready 
(no timeout) 00:20:27.432 [2024-07-15 22:44:10.759885] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.432 [2024-07-15 22:44:10.759901] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8ff540) 00:20:27.432 [2024-07-15 22:44:10.759912] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.432 [2024-07-15 22:44:10.759924] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.432 [2024-07-15 22:44:10.759931] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.432 [2024-07-15 22:44:10.759938] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8ff540) 00:20:27.432 [2024-07-15 22:44:10.759946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.432 [2024-07-15 22:44:10.759971] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x95f9c0, cid 4, qid 0 00:20:27.432 [2024-07-15 22:44:10.759983] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x95fb40, cid 5, qid 0 00:20:27.432 [2024-07-15 22:44:10.760167] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.432 [2024-07-15 22:44:10.760182] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.432 [2024-07-15 22:44:10.760189] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.432 [2024-07-15 22:44:10.760196] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x95f9c0) on tqpair=0x8ff540 00:20:27.432 [2024-07-15 22:44:10.760206] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.432 [2024-07-15 22:44:10.760215] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.432 [2024-07-15 22:44:10.760222] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:20:27.432 [2024-07-15 22:44:10.760228] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x95fb40) on tqpair=0x8ff540 00:20:27.432 [2024-07-15 22:44:10.760244] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.432 [2024-07-15 22:44:10.760253] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8ff540) 00:20:27.432 [2024-07-15 22:44:10.760264] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.432 [2024-07-15 22:44:10.760299] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x95fb40, cid 5, qid 0 00:20:27.432 [2024-07-15 22:44:10.760519] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.432 [2024-07-15 22:44:10.760532] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.432 [2024-07-15 22:44:10.760539] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.432 [2024-07-15 22:44:10.760546] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x95fb40) on tqpair=0x8ff540 00:20:27.432 [2024-07-15 22:44:10.760561] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.433 [2024-07-15 22:44:10.760570] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8ff540) 00:20:27.433 [2024-07-15 22:44:10.760580] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.433 [2024-07-15 22:44:10.760600] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x95fb40, cid 5, qid 0 00:20:27.433 [2024-07-15 22:44:10.760737] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.433 [2024-07-15 22:44:10.760749] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.433 [2024-07-15 22:44:10.760756] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.433 [2024-07-15 22:44:10.760763] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x95fb40) on tqpair=0x8ff540 00:20:27.433 [2024-07-15 22:44:10.760778] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.433 [2024-07-15 22:44:10.760787] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8ff540) 00:20:27.433 [2024-07-15 22:44:10.760798] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.433 [2024-07-15 22:44:10.760821] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x95fb40, cid 5, qid 0 00:20:27.433 [2024-07-15 22:44:10.760971] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.433 [2024-07-15 22:44:10.760986] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.433 [2024-07-15 22:44:10.760993] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.433 [2024-07-15 22:44:10.761000] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x95fb40) on tqpair=0x8ff540 00:20:27.433 [2024-07-15 22:44:10.761024] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.433 [2024-07-15 22:44:10.761036] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8ff540) 00:20:27.433 [2024-07-15 22:44:10.761046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.433 [2024-07-15 22:44:10.761059] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.433 [2024-07-15 22:44:10.761066] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8ff540) 00:20:27.433 [2024-07-15 22:44:10.761075] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.433 [2024-07-15 22:44:10.761087] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.433 [2024-07-15 22:44:10.761094] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x8ff540) 00:20:27.433 [2024-07-15 22:44:10.761104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.433 [2024-07-15 22:44:10.761115] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.433 [2024-07-15 22:44:10.761123] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x8ff540) 00:20:27.433 [2024-07-15 22:44:10.761132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.433 [2024-07-15 22:44:10.761154] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x95fb40, cid 5, qid 0 00:20:27.433 [2024-07-15 22:44:10.761165] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x95f9c0, cid 4, qid 0 00:20:27.433 [2024-07-15 22:44:10.761173] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x95fcc0, cid 6, qid 0 00:20:27.433 [2024-07-15 22:44:10.761181] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x95fe40, cid 7, qid 0 00:20:27.433 [2024-07-15 22:44:10.761428] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:27.433 [2024-07-15 22:44:10.761440] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:27.433 [2024-07-15 22:44:10.761447] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:27.433 [2024-07-15 22:44:10.761453] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8ff540): datao=0, datal=8192, cccid=5 00:20:27.433 [2024-07-15 22:44:10.761461] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x95fb40) on tqpair(0x8ff540): expected_datao=0, payload_size=8192 00:20:27.433 [2024-07-15 22:44:10.761468] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.433 [2024-07-15 22:44:10.761571] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:27.433 [2024-07-15 22:44:10.761581] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:27.433 [2024-07-15 22:44:10.761590] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:27.433 [2024-07-15 22:44:10.761598] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:27.433 [2024-07-15 22:44:10.761605] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:27.433 [2024-07-15 22:44:10.761611] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8ff540): datao=0, datal=512, cccid=4 00:20:27.433 [2024-07-15 22:44:10.761619] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x95f9c0) on tqpair(0x8ff540): expected_datao=0, payload_size=512 00:20:27.433 [2024-07-15 22:44:10.761630] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.433 [2024-07-15 22:44:10.761639] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:27.433 [2024-07-15 22:44:10.761646] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:27.433 [2024-07-15 22:44:10.761654] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:27.433 [2024-07-15 22:44:10.761663] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:27.433 [2024-07-15 22:44:10.761669] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:27.433 [2024-07-15 22:44:10.761676] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x8ff540): datao=0, datal=512, cccid=6 00:20:27.433 [2024-07-15 22:44:10.761683] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x95fcc0) on tqpair(0x8ff540): expected_datao=0, payload_size=512 00:20:27.433 [2024-07-15 22:44:10.761690] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.433 [2024-07-15 22:44:10.761699] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:27.433 [2024-07-15 22:44:10.761706] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:27.433 [2024-07-15 22:44:10.761714] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:27.433 [2024-07-15 22:44:10.761722] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:27.433 [2024-07-15 22:44:10.761729] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:27.433 [2024-07-15 22:44:10.761735] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8ff540): datao=0, datal=4096, cccid=7 00:20:27.433 [2024-07-15 22:44:10.761742] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x95fe40) on tqpair(0x8ff540): expected_datao=0, payload_size=4096 00:20:27.433 [2024-07-15 22:44:10.761749] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.433 [2024-07-15 22:44:10.761759] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:27.433 [2024-07-15 22:44:10.761766] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:27.433 [2024-07-15 22:44:10.761777] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.433 [2024-07-15 22:44:10.761786] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.433 [2024-07-15 22:44:10.761793] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.433 [2024-07-15 22:44:10.761799] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x95fb40) on tqpair=0x8ff540 00:20:27.433 [2024-07-15 22:44:10.761817] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.433 [2024-07-15 22:44:10.761829] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.433 [2024-07-15 22:44:10.761835] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.433 [2024-07-15 22:44:10.761842] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x95f9c0) on tqpair=0x8ff540 00:20:27.433 [2024-07-15 22:44:10.761857] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.433 [2024-07-15 22:44:10.761867] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.433 [2024-07-15 22:44:10.761874] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.433 [2024-07-15 22:44:10.761887] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x95fcc0) on tqpair=0x8ff540 00:20:27.433 [2024-07-15 22:44:10.761898] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.433 [2024-07-15 22:44:10.761907] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.433 [2024-07-15 22:44:10.761914] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.433 [2024-07-15 22:44:10.761920] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x95fe40) on tqpair=0x8ff540 00:20:27.433 ===================================================== 00:20:27.433 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:27.433 ===================================================== 00:20:27.433 Controller Capabilities/Features 00:20:27.433 ================================ 00:20:27.433 Vendor ID: 8086 00:20:27.433 Subsystem Vendor ID: 8086 00:20:27.433 Serial Number: SPDK00000000000001 00:20:27.433 Model Number: SPDK bdev Controller 00:20:27.433 Firmware Version: 24.09 00:20:27.433 Recommended Arb Burst: 6 00:20:27.433 IEEE OUI Identifier: e4 d2 5c 00:20:27.433 Multi-path I/O 00:20:27.434 May have multiple subsystem 
ports: Yes 00:20:27.434 May have multiple controllers: Yes 00:20:27.434 Associated with SR-IOV VF: No 00:20:27.434 Max Data Transfer Size: 131072 00:20:27.434 Max Number of Namespaces: 32 00:20:27.434 Max Number of I/O Queues: 127 00:20:27.434 NVMe Specification Version (VS): 1.3 00:20:27.434 NVMe Specification Version (Identify): 1.3 00:20:27.434 Maximum Queue Entries: 128 00:20:27.434 Contiguous Queues Required: Yes 00:20:27.434 Arbitration Mechanisms Supported 00:20:27.434 Weighted Round Robin: Not Supported 00:20:27.434 Vendor Specific: Not Supported 00:20:27.434 Reset Timeout: 15000 ms 00:20:27.434 Doorbell Stride: 4 bytes 00:20:27.434 NVM Subsystem Reset: Not Supported 00:20:27.434 Command Sets Supported 00:20:27.434 NVM Command Set: Supported 00:20:27.434 Boot Partition: Not Supported 00:20:27.434 Memory Page Size Minimum: 4096 bytes 00:20:27.434 Memory Page Size Maximum: 4096 bytes 00:20:27.434 Persistent Memory Region: Not Supported 00:20:27.434 Optional Asynchronous Events Supported 00:20:27.434 Namespace Attribute Notices: Supported 00:20:27.434 Firmware Activation Notices: Not Supported 00:20:27.434 ANA Change Notices: Not Supported 00:20:27.434 PLE Aggregate Log Change Notices: Not Supported 00:20:27.434 LBA Status Info Alert Notices: Not Supported 00:20:27.434 EGE Aggregate Log Change Notices: Not Supported 00:20:27.434 Normal NVM Subsystem Shutdown event: Not Supported 00:20:27.434 Zone Descriptor Change Notices: Not Supported 00:20:27.434 Discovery Log Change Notices: Not Supported 00:20:27.434 Controller Attributes 00:20:27.434 128-bit Host Identifier: Supported 00:20:27.434 Non-Operational Permissive Mode: Not Supported 00:20:27.434 NVM Sets: Not Supported 00:20:27.434 Read Recovery Levels: Not Supported 00:20:27.434 Endurance Groups: Not Supported 00:20:27.434 Predictable Latency Mode: Not Supported 00:20:27.434 Traffic Based Keep ALive: Not Supported 00:20:27.434 Namespace Granularity: Not Supported 00:20:27.434 SQ Associations: Not Supported 
00:20:27.434 UUID List: Not Supported
00:20:27.434 Multi-Domain Subsystem: Not Supported
00:20:27.434 Fixed Capacity Management: Not Supported
00:20:27.434 Variable Capacity Management: Not Supported
00:20:27.434 Delete Endurance Group: Not Supported
00:20:27.434 Delete NVM Set: Not Supported
00:20:27.434 Extended LBA Formats Supported: Not Supported
00:20:27.434 Flexible Data Placement Supported: Not Supported
00:20:27.434
00:20:27.434 Controller Memory Buffer Support
00:20:27.434 ================================
00:20:27.434 Supported: No
00:20:27.434
00:20:27.434 Persistent Memory Region Support
00:20:27.434 ================================
00:20:27.434 Supported: No
00:20:27.434
00:20:27.434 Admin Command Set Attributes
00:20:27.434 ============================
00:20:27.434 Security Send/Receive: Not Supported
00:20:27.434 Format NVM: Not Supported
00:20:27.434 Firmware Activate/Download: Not Supported
00:20:27.434 Namespace Management: Not Supported
00:20:27.434 Device Self-Test: Not Supported
00:20:27.434 Directives: Not Supported
00:20:27.434 NVMe-MI: Not Supported
00:20:27.434 Virtualization Management: Not Supported
00:20:27.434 Doorbell Buffer Config: Not Supported
00:20:27.434 Get LBA Status Capability: Not Supported
00:20:27.434 Command & Feature Lockdown Capability: Not Supported
00:20:27.434 Abort Command Limit: 4
00:20:27.434 Async Event Request Limit: 4
00:20:27.434 Number of Firmware Slots: N/A
00:20:27.434 Firmware Slot 1 Read-Only: N/A
00:20:27.434 Firmware Activation Without Reset: N/A
00:20:27.434 Multiple Update Detection Support: N/A
00:20:27.434 Firmware Update Granularity: No Information Provided
00:20:27.434 Per-Namespace SMART Log: No
00:20:27.434 Asymmetric Namespace Access Log Page: Not Supported
00:20:27.434 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:20:27.434 Command Effects Log Page: Supported
00:20:27.434 Get Log Page Extended Data: Supported
00:20:27.434 Telemetry Log Pages: Not Supported
00:20:27.434 Persistent Event Log Pages: Not Supported
00:20:27.434 Supported Log Pages Log Page: May Support
00:20:27.434 Commands Supported & Effects Log Page: Not Supported
00:20:27.434 Feature Identifiers & Effects Log Page: May Support
00:20:27.434 NVMe-MI Commands & Effects Log Page: May Support
00:20:27.434 Data Area 4 for Telemetry Log: Not Supported
00:20:27.434 Error Log Page Entries Supported: 128
00:20:27.434 Keep Alive: Supported
00:20:27.434 Keep Alive Granularity: 10000 ms
00:20:27.434
00:20:27.434 NVM Command Set Attributes
00:20:27.434 ==========================
00:20:27.434 Submission Queue Entry Size
00:20:27.434 Max: 64
00:20:27.434 Min: 64
00:20:27.434 Completion Queue Entry Size
00:20:27.434 Max: 16
00:20:27.434 Min: 16
00:20:27.434 Number of Namespaces: 32
00:20:27.434 Compare Command: Supported
00:20:27.434 Write Uncorrectable Command: Not Supported
00:20:27.434 Dataset Management Command: Supported
00:20:27.434 Write Zeroes Command: Supported
00:20:27.434 Set Features Save Field: Not Supported
00:20:27.434 Reservations: Supported
00:20:27.434 Timestamp: Not Supported
00:20:27.434 Copy: Supported
00:20:27.434 Volatile Write Cache: Present
00:20:27.434 Atomic Write Unit (Normal): 1
00:20:27.434 Atomic Write Unit (PFail): 1
00:20:27.434 Atomic Compare & Write Unit: 1
00:20:27.434 Fused Compare & Write: Supported
00:20:27.434 Scatter-Gather List
00:20:27.434 SGL Command Set: Supported
00:20:27.434 SGL Keyed: Supported
00:20:27.434 SGL Bit Bucket Descriptor: Not Supported
00:20:27.434 SGL Metadata Pointer: Not Supported
00:20:27.434 Oversized SGL: Not Supported
00:20:27.434 SGL Metadata Address: Not Supported
00:20:27.434 SGL Offset: Supported
00:20:27.434 Transport SGL Data Block: Not Supported
00:20:27.434 Replay Protected Memory Block: Not Supported
00:20:27.434
00:20:27.434 Firmware Slot Information
00:20:27.434 =========================
00:20:27.434 Active slot: 1
00:20:27.434 Slot 1 Firmware Revision: 24.09
00:20:27.434
00:20:27.434
00:20:27.434 Commands Supported and Effects
00:20:27.434 ==============================
00:20:27.434 Admin Commands
00:20:27.434 --------------
00:20:27.434 Get Log Page (02h): Supported
00:20:27.434 Identify (06h): Supported
00:20:27.434 Abort (08h): Supported
00:20:27.434 Set Features (09h): Supported
00:20:27.434 Get Features (0Ah): Supported
00:20:27.434 Asynchronous Event Request (0Ch): Supported
00:20:27.434 Keep Alive (18h): Supported
00:20:27.434 I/O Commands
00:20:27.434 ------------
00:20:27.434 Flush (00h): Supported LBA-Change
00:20:27.434 Write (01h): Supported LBA-Change
00:20:27.434 Read (02h): Supported
00:20:27.434 Compare (05h): Supported
00:20:27.434 Write Zeroes (08h): Supported LBA-Change
00:20:27.434 Dataset Management (09h): Supported LBA-Change
00:20:27.434 Copy (19h): Supported LBA-Change
00:20:27.434
00:20:27.434 Error Log
00:20:27.434 =========
00:20:27.434
00:20:27.434 Arbitration
00:20:27.434 ===========
00:20:27.434 Arbitration Burst: 1
00:20:27.434
00:20:27.434 Power Management
00:20:27.434 ================
00:20:27.434 Number of Power States: 1
00:20:27.434 Current Power State: Power State #0
00:20:27.434 Power State #0:
00:20:27.434 Max Power: 0.00 W
00:20:27.435 Non-Operational State: Operational
00:20:27.435 Entry Latency: Not Reported
00:20:27.435 Exit Latency: Not Reported
00:20:27.435 Relative Read Throughput: 0
00:20:27.435 Relative Read Latency: 0
00:20:27.435 Relative Write Throughput: 0
00:20:27.435 Relative Write Latency: 0
00:20:27.435 Idle Power: Not Reported
00:20:27.435 Active Power: Not Reported
00:20:27.435 Non-Operational Permissive Mode: Not Supported
00:20:27.435
00:20:27.435 Health Information
00:20:27.435 ==================
00:20:27.435 Critical Warnings:
00:20:27.435 Available Spare Space: OK
00:20:27.435 Temperature: OK
00:20:27.435 Device Reliability: OK
00:20:27.435 Read Only: No
00:20:27.435 Volatile Memory Backup: OK
00:20:27.435 Current Temperature: 0 Kelvin (-273 Celsius)
00:20:27.435 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:20:27.435 Available Spare: 0%
00:20:27.435 Available Spare Threshold: 0%
00:20:27.435 Life Percentage Used:[2024-07-15 22:44:10.762045] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:27.435 [2024-07-15 22:44:10.762057] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x8ff540)
00:20:27.435 [2024-07-15 22:44:10.762068] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:27.435 [2024-07-15 22:44:10.762090] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x95fe40, cid 7, qid 0
00:20:27.435 [2024-07-15 22:44:10.762288] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:27.435 [2024-07-15 22:44:10.762302] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:27.435 [2024-07-15 22:44:10.762309] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:27.435 [2024-07-15 22:44:10.762316] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x95fe40) on tqpair=0x8ff540
00:20:27.435 [2024-07-15 22:44:10.762361] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:20:27.435 [2024-07-15 22:44:10.762380] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x95f3c0) on tqpair=0x8ff540
00:20:27.435 [2024-07-15 22:44:10.762391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:27.435 [2024-07-15 22:44:10.762400] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x95f540) on tqpair=0x8ff540
00:20:27.435 [2024-07-15 22:44:10.762407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:27.435 [2024-07-15 22:44:10.762431] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x95f6c0) on tqpair=0x8ff540
00:20:27.435 [2024-07-15 22:44:10.762438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:27.435 [2024-07-15 22:44:10.762446] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x95f840) on tqpair=0x8ff540
00:20:27.435 [2024-07-15 22:44:10.762453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:27.435 [2024-07-15 22:44:10.762465] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:27.435 [2024-07-15 22:44:10.762473] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:27.435 [2024-07-15 22:44:10.762479] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8ff540)
00:20:27.435 [2024-07-15 22:44:10.762489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:27.435 [2024-07-15 22:44:10.762510] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x95f840, cid 3, qid 0
00:20:27.435 [2024-07-15 22:44:10.762690] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:27.435 [2024-07-15 22:44:10.762705] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:27.435 [2024-07-15 22:44:10.762712] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:27.435 [2024-07-15 22:44:10.762719] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x95f840) on tqpair=0x8ff540
00:20:27.435 [2024-07-15 22:44:10.762730] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:27.435 [2024-07-15 22:44:10.762738] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:27.435 [2024-07-15 22:44:10.762744] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8ff540)
00:20:27.435 [2024-07-15 22:44:10.762755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:27.435 [2024-07-15 22:44:10.762781] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x95f840, cid 3, qid 0
00:20:27.435 [2024-07-15 22:44:10.766902] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:27.435 [2024-07-15 22:44:10.766919] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:27.435 [2024-07-15 22:44:10.766927] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:27.435 [2024-07-15 22:44:10.766933] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x95f840) on tqpair=0x8ff540
00:20:27.435 [2024-07-15 22:44:10.766941] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us
00:20:27.435 [2024-07-15 22:44:10.766949] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms
00:20:27.435 [2024-07-15 22:44:10.766966] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:27.435 [2024-07-15 22:44:10.766994] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:27.435 [2024-07-15 22:44:10.767001] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8ff540)
00:20:27.435 [2024-07-15 22:44:10.767013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:27.435 [2024-07-15 22:44:10.767035] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x95f840, cid 3, qid 0
00:20:27.435 [2024-07-15 22:44:10.767205] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:27.435 [2024-07-15 22:44:10.767220] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:27.435 [2024-07-15 22:44:10.767228] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:27.435 [2024-07-15 22:44:10.767234] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x95f840) on tqpair=0x8ff540
00:20:27.435 [2024-07-15 22:44:10.767248] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds
00:20:27.435 0%
00:20:27.435 Data Units Read: 0
00:20:27.435 Data Units Written: 0
00:20:27.435 Host Read Commands: 0
00:20:27.435 Host Write Commands: 0
00:20:27.435 Controller Busy Time: 0 minutes
00:20:27.435 Power Cycles: 0
00:20:27.435 Power On Hours: 0 hours
00:20:27.435 Unsafe Shutdowns: 0
00:20:27.435 Unrecoverable Media Errors: 0
00:20:27.435 Lifetime Error Log Entries: 0
00:20:27.435 Warning Temperature Time: 0 minutes
00:20:27.435 Critical Temperature Time: 0 minutes
00:20:27.435
00:20:27.435 Number of Queues
00:20:27.435 ================
00:20:27.435 Number of I/O Submission Queues: 127
00:20:27.435 Number of I/O Completion Queues: 127
00:20:27.435
00:20:27.435 Active Namespaces
00:20:27.435 =================
00:20:27.435 Namespace ID:1
00:20:27.435 Error Recovery Timeout: Unlimited
00:20:27.435 Command Set Identifier: NVM (00h)
00:20:27.435 Deallocate: Supported
00:20:27.435 Deallocated/Unwritten Error: Not Supported
00:20:27.435 Deallocated Read Value: Unknown
00:20:27.435 Deallocate in Write Zeroes: Not Supported
00:20:27.435 Deallocated Guard Field: 0xFFFF
00:20:27.435 Flush: Supported
00:20:27.435 Reservation: Supported
00:20:27.435 Namespace Sharing Capabilities: Multiple Controllers
00:20:27.435 Size (in LBAs): 131072 (0GiB)
00:20:27.435 Capacity (in LBAs): 131072 (0GiB)
00:20:27.435 Utilization (in LBAs): 131072 (0GiB)
00:20:27.435 NGUID: ABCDEF0123456789ABCDEF0123456789
00:20:27.435 EUI64: ABCDEF0123456789
00:20:27.435 UUID: 857998e5-a485-4bcf-a9ac-8653af12c85e
00:20:27.435 Thin Provisioning: Not Supported
00:20:27.435 Per-NS Atomic Units: Yes
00:20:27.435 Atomic Boundary Size (Normal): 0
00:20:27.435 Atomic Boundary Size (PFail): 0
00:20:27.435 Atomic Boundary Offset: 0
00:20:27.435 Maximum Single Source Range Length: 65535
00:20:27.435 Maximum Copy Length: 65535
00:20:27.436 Maximum Source Range Count: 1
00:20:27.436 NGUID/EUI64 Never Reused: No
00:20:27.436 Namespace Write Protected: No
00:20:27.436 Number of LBA Formats: 1
00:20:27.436 Current LBA Format: LBA Format #00
00:20:27.436 LBA Format #00: Data Size: 512 Metadata Size: 0
00:20:27.436
00:20:27.436 22:44:10 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync
00:20:27.436 22:44:10 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:27.436 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable
00:20:27.436 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:20:27.436 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:20:27.436 22:44:10 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:20:27.436 22:44:10 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:20:27.436 22:44:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup
00:20:27.436 22:44:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync
00:20:27.436 22:44:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:27.436 22:44:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e
00:20:27.436 22:44:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:27.436 22:44:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:27.436 rmmod nvme_tcp
00:20:27.436 rmmod nvme_fabrics
00:20:27.436 rmmod nvme_keyring
00:20:27.436 22:44:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:27.436 22:44:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e
00:20:27.436 22:44:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0
00:20:27.436 22:44:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1312056 ']'
00:20:27.436 22:44:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1312056
00:20:27.436 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@942 -- # '[' -z 1312056 ']'
00:20:27.436 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # kill -0 1312056
00:20:27.436 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@947 -- # uname
00:20:27.436 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:20:27.436 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1312056
00:20:27.436 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # process_name=reactor_0
00:20:27.436 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']'
00:20:27.436 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1312056'
00:20:27.436 killing process with pid 1312056
00:20:27.436 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@961 -- # kill 1312056
00:20:27.436 22:44:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # wait 1312056
00:20:27.695 22:44:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:20:27.695 22:44:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:20:27.695 22:44:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:20:27.695 22:44:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:27.695 22:44:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns
00:20:27.695 22:44:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:27.695 22:44:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:27.695 22:44:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:30.227 22:44:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:20:30.227
00:20:30.227 real 0m5.389s
00:20:30.227 user 0m4.223s
00:20:30.227 sys 0m1.880s
00:20:30.227 22:44:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1118 -- # xtrace_disable
00:20:30.227 22:44:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:20:30.227 ************************************
00:20:30.227 END TEST nvmf_identify
00:20:30.227 ************************************
00:20:30.227 22:44:13 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0
00:20:30.227 22:44:13 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:20:30.227 22:44:13 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']'
00:20:30.227 22:44:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable
00:20:30.227 22:44:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:20:30.227 ************************************
00:20:30.227 START TEST nvmf_perf
00:20:30.227 ************************************
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:20:30.227 * Looking for test storage...
00:20:30.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable
00:20:30.227 22:44:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=()
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=()
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=()
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=()
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=()
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=()
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=()
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
Found 0000:0a:00.0 (0x8086 - 0x159b)
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
Found 0000:0a:00.1 (0x8086 - 0x159b)
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]]
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
Found net devices under 0000:0a:00.0: cvl_0_0
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]]
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
Found net devices under 0000:0a:00.1: cvl_0_1
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:20:32.125 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:20:32.126 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:32.126 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:20:32.126 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:32.126 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:32.126 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:32.126 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:20:32.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:32.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms
00:20:32.126
00:20:32.126 --- 10.0.0.2 ping statistics ---
00:20:32.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:32.126 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms
00:20:32.126 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:32.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:32.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms
00:20:32.126
00:20:32.126 --- 10.0.0.1 ping statistics ---
00:20:32.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:32.126 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms
00:20:32.126 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:32.126 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0
00:20:32.126 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:20:32.126 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:32.126 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:20:32.126 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:20:32.126 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:32.126 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:20:32.126 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:20:32.126 22:44:15 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:20:32.126 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:20:32.126 22:44:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@716 -- # xtrace_disable
00:20:32.126 22:44:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:20:32.126 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1314133
00:20:32.126 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:20:32.126 22:44:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1314133
00:20:32.126 22:44:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@823 -- # '[' -z 1314133 ']'
00:20:32.126 22:44:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:32.126 22:44:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@828 -- # local max_retries=100
00:20:32.126 22:44:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:32.126 22:44:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # xtrace_disable
00:20:32.126 22:44:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
[2024-07-15 22:44:15.410220] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization...
[2024-07-15 22:44:15.410301] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-07-15 22:44:15.479246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
[2024-07-15 22:44:15.596701] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-07-15 22:44:15.596764] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-07-15 22:44:15.596780] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-07-15 22:44:15.596795] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-07-15 22:44:15.596807] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:32.126 [2024-07-15 22:44:15.596928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:32.126 [2024-07-15 22:44:15.596956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:32.126 [2024-07-15 22:44:15.597015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:32.126 [2024-07-15 22:44:15.597018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.058 22:44:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:20:33.058 22:44:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # return 0 00:20:33.058 22:44:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:33.058 22:44:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:33.058 22:44:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:33.058 22:44:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:33.058 22:44:16 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:20:33.058 22:44:16 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:20:36.338 22:44:19 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:20:36.338 22:44:19 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:36.338 22:44:19 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:20:36.338 22:44:19 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:36.595 22:44:20 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:36.595 22:44:20 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 
0000:88:00.0 ']' 00:20:36.596 22:44:20 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:36.596 22:44:20 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:36.596 22:44:20 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:36.853 [2024-07-15 22:44:20.239049] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.853 22:44:20 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:37.111 22:44:20 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:37.111 22:44:20 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:37.369 22:44:20 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:37.369 22:44:20 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:37.626 22:44:21 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:37.884 [2024-07-15 22:44:21.242579] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:37.884 22:44:21 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:38.142 22:44:21 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:20:38.142 22:44:21 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 
00:20:38.142 22:44:21 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:38.142 22:44:21 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:20:39.548 Initializing NVMe Controllers 00:20:39.548 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:20:39.548 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:20:39.548 Initialization complete. Launching workers. 00:20:39.548 ======================================================== 00:20:39.548 Latency(us) 00:20:39.548 Device Information : IOPS MiB/s Average min max 00:20:39.548 PCIE (0000:88:00.0) NSID 1 from core 0: 85905.80 335.57 372.03 27.64 8292.10 00:20:39.548 ======================================================== 00:20:39.548 Total : 85905.80 335.57 372.03 27.64 8292.10 00:20:39.548 00:20:39.548 22:44:22 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:40.935 Initializing NVMe Controllers 00:20:40.935 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:40.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:40.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:40.935 Initialization complete. Launching workers. 
00:20:40.935 ======================================================== 00:20:40.935 Latency(us) 00:20:40.935 Device Information : IOPS MiB/s Average min max 00:20:40.935 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 99.00 0.39 10248.98 199.10 46159.62 00:20:40.935 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 17937.05 7942.79 47899.91 00:20:40.935 ======================================================== 00:20:40.935 Total : 155.00 0.61 13026.61 199.10 47899.91 00:20:40.935 00:20:40.935 22:44:24 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:41.864 Initializing NVMe Controllers 00:20:41.864 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:41.864 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:41.864 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:41.864 Initialization complete. Launching workers. 
00:20:41.864 ======================================================== 00:20:41.864 Latency(us) 00:20:41.864 Device Information : IOPS MiB/s Average min max 00:20:41.864 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8058.99 31.48 3985.15 804.99 7969.23 00:20:41.864 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3907.00 15.26 8218.42 3932.78 16070.03 00:20:41.864 ======================================================== 00:20:41.864 Total : 11965.99 46.74 5367.35 804.99 16070.03 00:20:41.864 00:20:42.120 22:44:25 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:20:42.120 22:44:25 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:20:42.120 22:44:25 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:44.641 Initializing NVMe Controllers 00:20:44.641 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:44.641 Controller IO queue size 128, less than required. 00:20:44.641 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:44.641 Controller IO queue size 128, less than required. 00:20:44.641 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:44.641 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:44.641 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:44.641 Initialization complete. Launching workers. 
00:20:44.641 ======================================================== 00:20:44.641 Latency(us) 00:20:44.641 Device Information : IOPS MiB/s Average min max 00:20:44.641 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 954.48 238.62 139071.56 74312.71 185982.33 00:20:44.641 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 572.99 143.25 229879.36 71132.77 337483.42 00:20:44.641 ======================================================== 00:20:44.641 Total : 1527.47 381.87 173135.63 71132.77 337483.42 00:20:44.641 00:20:44.641 22:44:28 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:44.898 No valid NVMe controllers or AIO or URING devices found 00:20:44.898 Initializing NVMe Controllers 00:20:44.898 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:44.898 Controller IO queue size 128, less than required. 00:20:44.898 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:44.898 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:44.898 Controller IO queue size 128, less than required. 00:20:44.898 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:44.898 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:20:44.898 WARNING: Some requested NVMe devices were skipped 00:20:44.898 22:44:28 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:47.427 Initializing NVMe Controllers 00:20:47.427 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:47.427 Controller IO queue size 128, less than required. 00:20:47.427 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:47.427 Controller IO queue size 128, less than required. 00:20:47.427 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:47.427 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:47.427 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:47.427 Initialization complete. Launching workers. 
00:20:47.427 00:20:47.427 ==================== 00:20:47.427 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:47.427 TCP transport: 00:20:47.427 polls: 25091 00:20:47.427 idle_polls: 9282 00:20:47.427 sock_completions: 15809 00:20:47.427 nvme_completions: 3061 00:20:47.427 submitted_requests: 4614 00:20:47.427 queued_requests: 1 00:20:47.427 00:20:47.427 ==================== 00:20:47.427 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:47.427 TCP transport: 00:20:47.427 polls: 27070 00:20:47.427 idle_polls: 11248 00:20:47.427 sock_completions: 15822 00:20:47.427 nvme_completions: 3693 00:20:47.427 submitted_requests: 5478 00:20:47.427 queued_requests: 1 00:20:47.427 ======================================================== 00:20:47.427 Latency(us) 00:20:47.427 Device Information : IOPS MiB/s Average min max 00:20:47.427 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 764.50 191.12 175227.82 85931.73 270559.50 00:20:47.427 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 922.39 230.60 142917.33 63139.44 209340.89 00:20:47.427 ======================================================== 00:20:47.427 Total : 1686.89 421.72 157560.41 63139.44 270559.50 00:20:47.427 00:20:47.684 22:44:30 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:20:47.684 22:44:30 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:47.942 22:44:31 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:20:47.942 22:44:31 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:47.942 22:44:31 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:20:47.942 22:44:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:47.942 22:44:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:20:47.942 22:44:31 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:47.942 22:44:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:20:47.942 22:44:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:47.942 22:44:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:47.942 rmmod nvme_tcp 00:20:47.942 rmmod nvme_fabrics 00:20:47.942 rmmod nvme_keyring 00:20:47.942 22:44:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:47.942 22:44:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:20:47.942 22:44:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:20:47.942 22:44:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1314133 ']' 00:20:47.942 22:44:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1314133 00:20:47.942 22:44:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@942 -- # '[' -z 1314133 ']' 00:20:47.942 22:44:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # kill -0 1314133 00:20:47.942 22:44:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@947 -- # uname 00:20:47.942 22:44:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:20:47.942 22:44:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1314133 00:20:47.942 22:44:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:20:47.942 22:44:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:20:47.942 22:44:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1314133' 00:20:47.942 killing process with pid 1314133 00:20:47.942 22:44:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@961 -- # kill 1314133 00:20:47.942 22:44:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # wait 1314133 00:20:49.844 22:44:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:49.844 22:44:32 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:49.844 22:44:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:49.844 22:44:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:49.844 22:44:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:49.845 22:44:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.845 22:44:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:49.845 22:44:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.743 22:44:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:51.743 00:20:51.743 real 0m21.705s 00:20:51.743 user 1m8.379s 00:20:51.743 sys 0m4.889s 00:20:51.743 22:44:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1118 -- # xtrace_disable 00:20:51.743 22:44:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:51.743 ************************************ 00:20:51.743 END TEST nvmf_perf 00:20:51.743 ************************************ 00:20:51.743 22:44:34 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:20:51.743 22:44:34 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:51.743 22:44:34 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:20:51.743 22:44:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:20:51.743 22:44:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:51.743 ************************************ 00:20:51.743 START TEST nvmf_fio_host 00:20:51.743 ************************************ 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:51.743 * Looking for test 
storage... 00:20:51.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.743 22:44:35 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:51.744 
22:44:35 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.744 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:20:51.744 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:51.744 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:51.744 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:51.744 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:51.744 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:51.744 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:51.744 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:51.744 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:51.744 22:44:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:51.744 22:44:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:20:51.744 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:51.744 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:51.744 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 
00:20:51.744 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:51.744 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:51.744 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.744 22:44:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:51.744 22:44:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.744 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:51.744 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:51.744 22:44:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:20:51.744 22:44:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@297 -- # local -ga x722 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:53.641 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:53.641 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:53.641 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:53.642 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:53.642 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:20:53.642 
22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:53.642 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:53.900 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:53.900 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:53.900 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:53.900 22:44:37 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:53.900 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:53.900 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:53.900 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:53.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:53.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:20:53.900 00:20:53.900 --- 10.0.0.2 ping statistics --- 00:20:53.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.900 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:20:53.900 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:53.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:53.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:20:53.900 00:20:53.900 --- 10.0.0.1 ping statistics --- 00:20:53.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.900 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:20:53.900 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:53.900 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:20:53.900 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:53.900 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:53.900 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:53.900 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:53.900 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:53.900 22:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:53.900 22:44:37 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:53.900 22:44:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:20:53.900 22:44:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:53.900 22:44:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:53.900 22:44:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.900 22:44:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1318117 00:20:53.900 22:44:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:53.900 22:44:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:53.900 22:44:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1318117 00:20:53.900 22:44:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@823 -- # '[' -z 1318117 ']' 00:20:53.900 22:44:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.900 22:44:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@828 -- # local max_retries=100 00:20:53.900 22:44:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.900 22:44:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # xtrace_disable 00:20:53.900 22:44:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.900 [2024-07-15 22:44:37.318267] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:20:53.900 [2024-07-15 22:44:37.318361] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.900 [2024-07-15 22:44:37.387539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:54.158 [2024-07-15 22:44:37.504314] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:54.158 [2024-07-15 22:44:37.504379] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:54.158 [2024-07-15 22:44:37.504395] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:54.158 [2024-07-15 22:44:37.504408] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:54.158 [2024-07-15 22:44:37.504419] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:54.158 [2024-07-15 22:44:37.504543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:54.158 [2024-07-15 22:44:37.504629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:54.158 [2024-07-15 22:44:37.504723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:54.158 [2024-07-15 22:44:37.504726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.088 22:44:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:20:55.089 22:44:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # return 0 00:20:55.089 22:44:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:55.089 [2024-07-15 22:44:38.487495] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:55.089 22:44:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:55.089 22:44:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:55.089 22:44:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.089 22:44:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:55.346 Malloc1 00:20:55.346 22:44:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:55.603 22:44:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:55.860 22:44:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:56.117 
[2024-07-15 22:44:39.518428] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.117 22:44:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:56.374 22:44:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:20:56.375 22:44:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:56.375 22:44:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:56.375 22:44:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:20:56.375 22:44:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:56.375 22:44:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local sanitizers 00:20:56.375 22:44:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:56.375 22:44:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # shift 00:20:56.375 22:44:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local asan_lib= 00:20:56.375 22:44:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:20:56.375 22:44:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:56.375 22:44:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # grep libasan 00:20:56.375 22:44:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:20:56.375 22:44:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # asan_lib= 00:20:56.375 22:44:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:20:56.375 22:44:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:20:56.375 22:44:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:56.375 22:44:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:20:56.375 22:44:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:20:56.375 22:44:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # asan_lib= 00:20:56.375 22:44:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:20:56.375 22:44:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:56.375 22:44:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:56.632 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:56.632 fio-3.35 00:20:56.632 Starting 1 thread 00:20:59.192 00:20:59.192 test: (groupid=0, jobs=1): err= 0: pid=1318590: Mon Jul 15 22:44:42 2024 00:20:59.192 read: IOPS=7032, BW=27.5MiB/s (28.8MB/s)(55.1MiB/2007msec) 00:20:59.192 slat (nsec): min=1902, max=147074, avg=2594.64, stdev=1952.79 00:20:59.192 clat 
(usec): min=4605, max=16122, avg=9940.41, stdev=960.87 00:20:59.192 lat (usec): min=4633, max=16125, avg=9943.00, stdev=960.83 00:20:59.192 clat percentiles (usec): 00:20:59.192 | 1.00th=[ 7767], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 9110], 00:20:59.192 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10159], 00:20:59.192 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11076], 95.00th=[11469], 00:20:59.192 | 99.00th=[12256], 99.50th=[12518], 99.90th=[14222], 99.95th=[15008], 00:20:59.192 | 99.99th=[16057] 00:20:59.192 bw ( KiB/s): min=26112, max=29232, per=99.83%, avg=28082.00, stdev=1359.35, samples=4 00:20:59.192 iops : min= 6528, max= 7308, avg=7020.50, stdev=339.84, samples=4 00:20:59.192 write: IOPS=7035, BW=27.5MiB/s (28.8MB/s)(55.2MiB/2007msec); 0 zone resets 00:20:59.192 slat (usec): min=2, max=133, avg= 2.73, stdev= 1.63 00:20:59.192 clat (usec): min=2644, max=14113, avg=8162.39, stdev=862.28 00:20:59.192 lat (usec): min=2653, max=14115, avg=8165.12, stdev=862.28 00:20:59.192 clat percentiles (usec): 00:20:59.192 | 1.00th=[ 6259], 5.00th=[ 6783], 10.00th=[ 7111], 20.00th=[ 7439], 00:20:59.192 | 30.00th=[ 7701], 40.00th=[ 7963], 50.00th=[ 8160], 60.00th=[ 8356], 00:20:59.192 | 70.00th=[ 8586], 80.00th=[ 8848], 90.00th=[ 9241], 95.00th=[ 9503], 00:20:59.192 | 99.00th=[10028], 99.50th=[10421], 99.90th=[11731], 99.95th=[12780], 00:20:59.192 | 99.99th=[14091] 00:20:59.192 bw ( KiB/s): min=27224, max=29312, per=99.97%, avg=28134.00, stdev=912.78, samples=4 00:20:59.192 iops : min= 6806, max= 7328, avg=7033.50, stdev=228.20, samples=4 00:20:59.192 lat (msec) : 4=0.04%, 10=75.71%, 20=24.25% 00:20:59.192 cpu : usr=53.69%, sys=40.28%, ctx=77, majf=0, minf=39 00:20:59.192 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:59.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:59.192 issued rwts: 
total=14114,14121,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.192 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:59.192 00:20:59.192 Run status group 0 (all jobs): 00:20:59.192 READ: bw=27.5MiB/s (28.8MB/s), 27.5MiB/s-27.5MiB/s (28.8MB/s-28.8MB/s), io=55.1MiB (57.8MB), run=2007-2007msec 00:20:59.192 WRITE: bw=27.5MiB/s (28.8MB/s), 27.5MiB/s-27.5MiB/s (28.8MB/s-28.8MB/s), io=55.2MiB (57.8MB), run=2007-2007msec 00:20:59.192 22:44:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:59.193 22:44:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:59.193 22:44:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:20:59.193 22:44:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:59.193 22:44:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local sanitizers 00:20:59.193 22:44:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:59.193 22:44:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # shift 00:20:59.193 22:44:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local asan_lib= 00:20:59.193 22:44:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:20:59.193 22:44:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:59.193 22:44:42 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1339 -- # grep libasan 00:20:59.193 22:44:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:20:59.193 22:44:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # asan_lib= 00:20:59.193 22:44:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:20:59.193 22:44:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:20:59.193 22:44:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:59.193 22:44:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:20:59.193 22:44:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:20:59.193 22:44:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # asan_lib= 00:20:59.193 22:44:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:20:59.193 22:44:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:59.193 22:44:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:59.193 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:59.193 fio-3.35 00:20:59.193 Starting 1 thread 00:21:01.722 00:21:01.722 test: (groupid=0, jobs=1): err= 0: pid=1318923: Mon Jul 15 22:44:44 2024 00:21:01.722 read: IOPS=8195, BW=128MiB/s (134MB/s)(257MiB/2005msec) 00:21:01.722 slat (nsec): min=2893, max=94041, avg=3763.07, stdev=1565.53 00:21:01.722 clat (usec): min=3227, max=17281, avg=9397.97, stdev=2295.61 00:21:01.722 lat (usec): min=3230, max=17284, avg=9401.73, 
stdev=2295.65 00:21:01.722 clat percentiles (usec): 00:21:01.722 | 1.00th=[ 4817], 5.00th=[ 5604], 10.00th=[ 6325], 20.00th=[ 7373], 00:21:01.722 | 30.00th=[ 8094], 40.00th=[ 8848], 50.00th=[ 9503], 60.00th=[10028], 00:21:01.722 | 70.00th=[10552], 80.00th=[11338], 90.00th=[12125], 95.00th=[13304], 00:21:01.722 | 99.00th=[15270], 99.50th=[15533], 99.90th=[16188], 99.95th=[16450], 00:21:01.722 | 99.99th=[16909] 00:21:01.722 bw ( KiB/s): min=59648, max=74400, per=51.27%, avg=67224.00, stdev=7146.70, samples=4 00:21:01.722 iops : min= 3728, max= 4650, avg=4201.50, stdev=446.67, samples=4 00:21:01.722 write: IOPS=4875, BW=76.2MiB/s (79.9MB/s)(138MiB/1810msec); 0 zone resets 00:21:01.722 slat (usec): min=30, max=205, avg=33.99, stdev= 5.60 00:21:01.722 clat (usec): min=4406, max=19837, avg=10945.22, stdev=2063.10 00:21:01.722 lat (usec): min=4438, max=19870, avg=10979.20, stdev=2063.77 00:21:01.722 clat percentiles (usec): 00:21:01.722 | 1.00th=[ 7111], 5.00th=[ 7963], 10.00th=[ 8455], 20.00th=[ 9110], 00:21:01.722 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[10683], 60.00th=[11338], 00:21:01.722 | 70.00th=[11994], 80.00th=[12780], 90.00th=[13829], 95.00th=[14615], 00:21:01.722 | 99.00th=[16319], 99.50th=[16712], 99.90th=[19530], 99.95th=[19530], 00:21:01.722 | 99.99th=[19792] 00:21:01.722 bw ( KiB/s): min=62496, max=77568, per=89.93%, avg=70152.00, stdev=7620.77, samples=4 00:21:01.722 iops : min= 3906, max= 4848, avg=4384.50, stdev=476.30, samples=4 00:21:01.722 lat (msec) : 4=0.10%, 10=51.13%, 20=48.77% 00:21:01.722 cpu : usr=75.45%, sys=20.76%, ctx=19, majf=0, minf=67 00:21:01.722 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:01.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.722 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:01.722 issued rwts: total=16431,8825,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:01.722 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:21:01.722 00:21:01.722 Run status group 0 (all jobs): 00:21:01.722 READ: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=257MiB (269MB), run=2005-2005msec 00:21:01.722 WRITE: bw=76.2MiB/s (79.9MB/s), 76.2MiB/s-76.2MiB/s (79.9MB/s-79.9MB/s), io=138MiB (145MB), run=1810-1810msec 00:21:01.722 22:44:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:01.722 22:44:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:21:01.722 22:44:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:01.722 22:44:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:01.722 22:44:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:01.722 22:44:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:01.722 22:44:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:21:01.722 22:44:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:01.722 22:44:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:21:01.722 22:44:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:01.722 22:44:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:01.722 rmmod nvme_tcp 00:21:01.980 rmmod nvme_fabrics 00:21:01.980 rmmod nvme_keyring 00:21:01.980 22:44:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:01.980 22:44:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:21:01.980 22:44:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:21:01.980 22:44:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1318117 ']' 00:21:01.980 22:44:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1318117 00:21:01.980 22:44:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@942 -- # '[' -z 1318117 
']' 00:21:01.980 22:44:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # kill -0 1318117 00:21:01.980 22:44:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@947 -- # uname 00:21:01.980 22:44:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:01.980 22:44:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1318117 00:21:01.980 22:44:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:21:01.980 22:44:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:21:01.980 22:44:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1318117' 00:21:01.980 killing process with pid 1318117 00:21:01.980 22:44:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@961 -- # kill 1318117 00:21:01.980 22:44:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # wait 1318117 00:21:02.238 22:44:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:02.239 22:44:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:02.239 22:44:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:02.239 22:44:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:02.239 22:44:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:02.239 22:44:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.239 22:44:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:02.239 22:44:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.153 22:44:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:04.153 00:21:04.153 real 0m12.613s 00:21:04.153 user 0m37.163s 00:21:04.153 sys 0m4.304s 00:21:04.153 
22:44:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1118 -- # xtrace_disable 00:21:04.153 22:44:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.153 ************************************ 00:21:04.153 END TEST nvmf_fio_host 00:21:04.153 ************************************ 00:21:04.153 22:44:47 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:21:04.153 22:44:47 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:04.153 22:44:47 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:21:04.153 22:44:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:21:04.153 22:44:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:04.411 ************************************ 00:21:04.411 START TEST nvmf_failover 00:21:04.411 ************************************ 00:21:04.411 22:44:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:04.411 * Looking for test storage... 
00:21:04.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:04.411 22:44:47 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:04.411 22:44:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:04.411 22:44:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:04.411 22:44:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:04.412 22:44:47 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:04.412 22:44:47 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:21:04.412 22:44:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:21:06.334 22:44:49 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:06.334 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:06.334 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:06.334 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
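The device discovery above sorts PCI NICs into driver families by vendor:device ID before picking test interfaces. A minimal sketch of that matching, assuming the Intel IDs logged in common.sh (0x1592/0x159b for e810, 0x37d2 for x722) and simplifying the per-ID Mellanox list to a vendor wildcard:

```shell
# Sketch of the ID matching gather_supported_nvmf_pci_devs performs above.
# Intel IDs come straight from the logged common.sh lines; collapsing all
# Mellanox (0x15b3) devices to one wildcard is a simplification of the
# per-device list in the real script.
classify_nic() {
  local vendor=$1 device=$2
  case "$vendor:$device" in
    0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 family
    0x8086:0x37d2)               echo x722 ;;    # Intel X722
    0x15b3:*)                    echo mlx ;;     # Mellanox (simplified)
    *)                           echo unknown ;;
  esac
}

# Both ports found in this run report "0x8086 - 0x159b":
classify_nic 0x8086 0x159b   # -> e810
```

This is why the log takes the `[[ e810 == e810 ]]` branch and keeps only the two 0000:0a:00.x ports as candidate TCP interfaces.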
00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:06.334 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:06.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:06.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:21:06.334 00:21:06.334 --- 10.0.0.2 ping statistics --- 00:21:06.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.334 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:06.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:06.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:21:06.334 00:21:06.334 --- 10.0.0.1 ping statistics --- 00:21:06.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.334 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:06.334 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1321228 00:21:06.335 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:06.335 22:44:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1321228 00:21:06.335 22:44:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@823 -- # '[' -z 1321228 ']' 
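The nvmf_tcp_init steps above move one port into a network namespace, address both sides, and ping across. A dry-run sketch of that topology, with the namespace name, interface names, and 10.0.0.x addresses taken from the log; the commands are only echoed here (the real ones need root and the cvl_0_* interfaces), so this prints the plan rather than applying it:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace topology nvmf_tcp_init builds above.
set -euo pipefail

run() { echo "$*"; }   # swap the body for: "$@"  to execute for real

build_topology() {
  local ns=cvl_0_0_ns_spdk
  run ip netns add "$ns"                              # target-side namespace
  run ip link set cvl_0_0 netns "$ns"                 # move target port into it
  run ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, host ns
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
  run ip link set cvl_0_1 up
  run ip netns exec "$ns" ip link set cvl_0_0 up
  run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  run ping -c 1 10.0.0.2                              # host -> target sanity check
}

build_topology
```

The payoff is the `NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" ...)` line that follows: nvmf_tgt is launched under `ip netns exec cvl_0_0_ns_spdk`, so initiator (10.0.0.1) and target (10.0.0.2) traffic traverses a real interface pair on one box.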
00:21:06.335 22:44:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.335 22:44:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:06.335 22:44:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.335 22:44:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:06.335 22:44:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:06.593 [2024-07-15 22:44:49.876755] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:21:06.593 [2024-07-15 22:44:49.876855] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:06.593 [2024-07-15 22:44:49.945429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:06.593 [2024-07-15 22:44:50.061769] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:06.593 [2024-07-15 22:44:50.061838] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.593 [2024-07-15 22:44:50.061863] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:06.593 [2024-07-15 22:44:50.061883] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:06.593 [2024-07-15 22:44:50.061897] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:06.593 [2024-07-15 22:44:50.061981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:06.593 [2024-07-15 22:44:50.062096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:06.593 [2024-07-15 22:44:50.062099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.525 22:44:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:07.525 22:44:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # return 0 00:21:07.525 22:44:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:07.525 22:44:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:07.525 22:44:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:07.525 22:44:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:07.525 22:44:50 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:07.805 [2024-07-15 22:44:51.075839] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.805 22:44:51 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:08.063 Malloc0 00:21:08.063 22:44:51 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:08.320 22:44:51 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:08.577 22:44:51 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:08.835 [2024-07-15 22:44:52.101917] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:08.835 22:44:52 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:09.093 [2024-07-15 22:44:52.350747] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:09.093 22:44:52 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:09.093 [2024-07-15 22:44:52.591531] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:09.351 22:44:52 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1321527 00:21:09.351 22:44:52 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:09.351 22:44:52 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:09.351 22:44:52 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1321527 /var/tmp/bdevperf.sock 00:21:09.351 22:44:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@823 -- # '[' -z 1321527 ']' 00:21:09.351 22:44:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:09.351 22:44:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:09.351 22:44:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:09.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:09.351 22:44:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:09.351 22:44:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:09.608 22:44:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:09.608 22:44:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # return 0 00:21:09.608 22:44:52 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:10.172 NVMe0n1 00:21:10.172 22:44:53 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:10.429 00:21:10.429 22:44:53 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1321661 00:21:10.429 22:44:53 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:10.429 22:44:53 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:11.361 22:44:54 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:11.620 [2024-07-15 22:44:54.963377] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1267070 is same with the state(5) to be set 00:21:11.620 22:44:54 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:14.949 22:44:57 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s
/var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:14.949 00:21:14.949 22:44:58 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:15.208 [2024-07-15 22:44:58.551077] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551146] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551187] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551199] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551212] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551240] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551251] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551263] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551276] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551288] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be 
set 00:21:15.208 [2024-07-15 22:44:58.551300] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551312] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551323] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551336] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551348] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551372] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551384] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551397] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551409] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551421] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551434] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551447] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 
22:44:58.551459] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551472] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551484] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551497] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551508] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551533] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551546] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551559] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551572] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551600] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551611] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551623] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551636] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551648] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551660] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551672] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551683] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551695] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551706] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551719] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551734] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551746] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551758] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551769] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551780] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551791] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551802] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551813] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551824] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551836] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551847] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551858] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551869] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551904] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551927] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551938] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551949] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551961] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 [2024-07-15 22:44:58.551972] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268620 is same with the state(5) to be set 00:21:15.208 22:44:58 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:21:18.489 22:45:01 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:18.489 [2024-07-15 22:45:01.810204] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.489 22:45:01 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:21:19.423 22:45:02 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:19.680 [2024-07-15 22:45:03.112239] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268e30 is same with the state(5) to be set 00:21:19.680 [2024-07-15 22:45:03.112312] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268e30 is same with the state(5) to be set 00:21:19.680 [2024-07-15 22:45:03.112326] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268e30 is same with the state(5) to be set 00:21:19.680 [2024-07-15 22:45:03.112350] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268e30 is same with the state(5) to be set 00:21:19.680 [2024-07-15 22:45:03.112363] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268e30 is same with the 
state(5) to be set 00:21:19.680 [2024-07-15 22:45:03.112375] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268e30 is same with the state(5) to be set 00:21:19.680 [2024-07-15 22:45:03.112387] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268e30 is same with the state(5) to be set 00:21:19.680 [2024-07-15 22:45:03.112398] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268e30 is same with the state(5) to be set 00:21:19.680 [2024-07-15 22:45:03.112410] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268e30 is same with the state(5) to be set 00:21:19.680 [2024-07-15 22:45:03.112421] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268e30 is same with the state(5) to be set 00:21:19.680 [2024-07-15 22:45:03.112433] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268e30 is same with the state(5) to be set 00:21:19.680 [2024-07-15 22:45:03.112445] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268e30 is same with the state(5) to be set 00:21:19.680 [2024-07-15 22:45:03.112472] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268e30 is same with the state(5) to be set 00:21:19.680 [2024-07-15 22:45:03.112484] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268e30 is same with the state(5) to be set 00:21:19.680 [2024-07-15 22:45:03.112496] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268e30 is same with the state(5) to be set 00:21:19.680 [2024-07-15 22:45:03.112508] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268e30 is same with the state(5) to be set 00:21:19.680 [2024-07-15 22:45:03.112520] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268e30 is same with the state(5) to be set 00:21:19.680 
[2024-07-15 22:45:03.112532] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268e30 is same with the state(5) to be set 00:21:19.680 [2024-07-15 22:45:03.112552] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268e30 is same with the state(5) to be set 00:21:19.680 [2024-07-15 22:45:03.112566] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268e30 is same with the state(5) to be set 00:21:19.680 [2024-07-15 22:45:03.112578] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268e30 is same with the state(5) to be set 00:21:19.680 [2024-07-15 22:45:03.112590] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268e30 is same with the state(5) to be set 00:21:19.680 [2024-07-15 22:45:03.112603] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268e30 is same with the state(5) to be set 00:21:19.680 [2024-07-15 22:45:03.112615] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268e30 is same with the state(5) to be set 00:21:19.680 [2024-07-15 22:45:03.112628] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268e30 is same with the state(5) to be set 00:21:19.680 [2024-07-15 22:45:03.112640] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268e30 is same with the state(5) to be set 00:21:19.680 [2024-07-15 22:45:03.112652] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268e30 is same with the state(5) to be set 00:21:19.680 [2024-07-15 22:45:03.112663] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268e30 is same with the state(5) to be set 00:21:19.680 [2024-07-15 22:45:03.112675] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268e30 is same with the state(5) to be set 00:21:19.680 22:45:03 nvmf_tcp.nvmf_failover 
-- host/failover.sh@59 -- # wait 1321661 00:21:26.261 0 00:21:26.261 22:45:08 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1321527 00:21:26.261 22:45:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@942 -- # '[' -z 1321527 ']' 00:21:26.261 22:45:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # kill -0 1321527 00:21:26.261 22:45:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # uname 00:21:26.261 22:45:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:26.261 22:45:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1321527 00:21:26.261 22:45:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:21:26.261 22:45:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:21:26.261 22:45:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1321527' 00:21:26.261 killing process with pid 1321527 00:21:26.261 22:45:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@961 -- # kill 1321527 00:21:26.261 22:45:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # wait 1321527 00:21:26.261 22:45:09 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:26.261 [2024-07-15 22:44:52.655189] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:21:26.261 [2024-07-15 22:44:52.655287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1321527 ] 00:21:26.261 [2024-07-15 22:44:52.714974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.261 [2024-07-15 22:44:52.829045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.261 Running I/O for 15 seconds... 00:21:26.261 [2024-07-15 22:44:54.966483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:80760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.261 [2024-07-15 22:44:54.966530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.261 [2024-07-15 22:44:54.966558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:80768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.261 [2024-07-15 22:44:54.966573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.261 [2024-07-15 22:44:54.966589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:80776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.261 [2024-07-15 22:44:54.966603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.261 [2024-07-15 22:44:54.966618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:80784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.261 [2024-07-15 22:44:54.966633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.261 [2024-07-15 22:44:54.966647] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:80792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.261 [2024-07-15 22:44:54.966663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.261 [2024-07-15 22:44:54.966677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.261 [2024-07-15 22:44:54.966692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.261 [2024-07-15 22:44:54.966707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.261 [2024-07-15 22:44:54.966723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.261 [2024-07-15 22:44:54.966739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.261 [2024-07-15 22:44:54.966753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.261 [2024-07-15 22:44:54.966768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.261 [2024-07-15 22:44:54.966782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.261 [2024-07-15 22:44:54.966797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.261 [2024-07-15 22:44:54.966810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.261 [2024-07-15 22:44:54.966826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.261 [2024-07-15 22:44:54.966841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.261 [2024-07-15 22:44:54.966887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.261 [2024-07-15 22:44:54.966904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.261 [2024-07-15 22:44:54.966947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.261 [2024-07-15 22:44:54.966963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.261 [2024-07-15 22:44:54.966979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.261 [2024-07-15 22:44:54.966994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.261 [2024-07-15 22:44:54.967010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.261 [2024-07-15 22:44:54.967024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.261 [2024-07-15 22:44:54.967039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:26.261 [2024-07-15 22:44:54.967053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.261 [2024-07-15 22:44:54.967069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.261 [2024-07-15 22:44:54.967083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.261 [2024-07-15 22:44:54.967098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.262 [2024-07-15 22:44:54.967113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.262 [2024-07-15 22:44:54.967128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.262 [2024-07-15 22:44:54.967142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.262 [2024-07-15 22:44:54.967157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.262 [2024-07-15 22:44:54.967187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.262 [2024-07-15 22:44:54.967203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.262 [2024-07-15 22:44:54.967220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.262 [2024-07-15 22:44:54.967234] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.262 [2024-07-15 22:44:54.967248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.262 [2024-07-15 22:44:54.967264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.262 [2024-07-15 22:44:54.967278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.262 [2024-07-15 22:44:54.967292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.262 [2024-07-15 22:44:54.967310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.262 [2024-07-15 22:44:54.967326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.262 [2024-07-15 22:44:54.967340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.262 [2024-07-15 22:44:54.967355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.262 [2024-07-15 22:44:54.967369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.262 [2024-07-15 22:44:54.967384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.262 [2024-07-15 22:44:54.967397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.262 [2024-07-15 22:44:54.967412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:81000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.262 [2024-07-15 22:44:54.967425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.262 [2024-07-15 22:44:54.967440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.262 [2024-07-15 22:44:54.967454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.262 [2024-07-15 22:44:54.967468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.262 [2024-07-15 22:44:54.967482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.262 [2024-07-15 22:44:54.967497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.262 [2024-07-15 22:44:54.967511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.262 [2024-07-15 22:44:54.967526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.262 [2024-07-15 22:44:54.967539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.262 [2024-07-15 22:44:54.967554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.262 
[2024-07-15 22:44:54.967568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.262 [2024-07-15 22:44:54.967583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.262 [... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs elided: WRITE and READ commands on qid:1 (lba 80800-81448, len:8) each completed with "ABORTED - SQ DELETION (00/08)"; four ASYNC EVENT REQUEST commands aborted on qid:0; one nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state *ERROR*: "The recv state of tqpair=0x13dc0f0 is same with the state(5) to be set"; then repeated nvme_qpair_abort_queued_reqs *ERROR* / nvme_qpair_manual_complete_request *NOTICE* sequences manually completing queued WRITE commands (lba 81456-81736, len:8, PRP1 0x0 PRP2 0x0), each likewise aborted with SQ DELETION ...] 00:21:26.265 [2024-07-15 22:44:54.971629] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81744 len:8 PRP1 0x0 PRP2 0x0 00:21:26.265 [2024-07-15 22:44:54.971641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.265 [2024-07-15 22:44:54.971653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.265 [2024-07-15 22:44:54.971664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.265 [2024-07-15 22:44:54.971675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81752 len:8 PRP1 0x0 PRP2 0x0 00:21:26.265 [2024-07-15 22:44:54.971688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.265 [2024-07-15 22:44:54.971701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.265 [2024-07-15 22:44:54.971711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.265 [2024-07-15 22:44:54.971722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81760 len:8 PRP1 0x0 PRP2 0x0 00:21:26.265 [2024-07-15 22:44:54.971734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.265 [2024-07-15 22:44:54.971747] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.265 [2024-07-15 22:44:54.971758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.265 [2024-07-15 22:44:54.971769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81768 len:8 PRP1 0x0 PRP2 0x0 00:21:26.265 [2024-07-15 22:44:54.971782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:26.265 [2024-07-15 22:44:54.971794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.265 [2024-07-15 22:44:54.971805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.265 [2024-07-15 22:44:54.971816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81776 len:8 PRP1 0x0 PRP2 0x0 00:21:26.265 [2024-07-15 22:44:54.971828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.265 [2024-07-15 22:44:54.971841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.265 [2024-07-15 22:44:54.971851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.265 [2024-07-15 22:44:54.971862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80760 len:8 PRP1 0x0 PRP2 0x0 00:21:26.265 [2024-07-15 22:44:54.971874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.265 [2024-07-15 22:44:54.971926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.265 [2024-07-15 22:44:54.971937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.265 [2024-07-15 22:44:54.971949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80768 len:8 PRP1 0x0 PRP2 0x0 00:21:26.265 [2024-07-15 22:44:54.971962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.265 [2024-07-15 22:44:54.971975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.265 [2024-07-15 22:44:54.971987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:21:26.265 [2024-07-15 22:44:54.971998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80776 len:8 PRP1 0x0 PRP2 0x0 00:21:26.265 [2024-07-15 22:44:54.972018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.265 [2024-07-15 22:44:54.972032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.265 [2024-07-15 22:44:54.972043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.265 [2024-07-15 22:44:54.972055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80784 len:8 PRP1 0x0 PRP2 0x0 00:21:26.265 [2024-07-15 22:44:54.984385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.265 [2024-07-15 22:44:54.984418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.265 [2024-07-15 22:44:54.984432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.265 [2024-07-15 22:44:54.984444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80792 len:8 PRP1 0x0 PRP2 0x0 00:21:26.265 [2024-07-15 22:44:54.984457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.265 [2024-07-15 22:44:54.984471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.265 [2024-07-15 22:44:54.984482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.265 [2024-07-15 22:44:54.984493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80824 len:8 PRP1 0x0 PRP2 0x0 00:21:26.265 [2024-07-15 22:44:54.984506] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.265 [2024-07-15 22:44:54.984519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.265 [2024-07-15 22:44:54.984530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.265 [2024-07-15 22:44:54.984542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80832 len:8 PRP1 0x0 PRP2 0x0 00:21:26.265 [2024-07-15 22:44:54.984554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.265 [2024-07-15 22:44:54.984569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.265 [2024-07-15 22:44:54.984591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.265 [2024-07-15 22:44:54.984602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80840 len:8 PRP1 0x0 PRP2 0x0 00:21:26.265 [2024-07-15 22:44:54.984615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.265 [2024-07-15 22:44:54.984628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.265 [2024-07-15 22:44:54.984639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.265 [2024-07-15 22:44:54.984661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80848 len:8 PRP1 0x0 PRP2 0x0 00:21:26.265 [2024-07-15 22:44:54.984674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.265 [2024-07-15 22:44:54.984687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.265 
[2024-07-15 22:44:54.984698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.265 [2024-07-15 22:44:54.984709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80856 len:8 PRP1 0x0 PRP2 0x0 00:21:26.265 [2024-07-15 22:44:54.984722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.265 [2024-07-15 22:44:54.984736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.265 [2024-07-15 22:44:54.984747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.265 [2024-07-15 22:44:54.984764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80864 len:8 PRP1 0x0 PRP2 0x0 00:21:26.266 [2024-07-15 22:44:54.984777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.266 [2024-07-15 22:44:54.984791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.266 [2024-07-15 22:44:54.984816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.266 [2024-07-15 22:44:54.984828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80872 len:8 PRP1 0x0 PRP2 0x0 00:21:26.266 [2024-07-15 22:44:54.984841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.266 [2024-07-15 22:44:54.984854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.266 [2024-07-15 22:44:54.984865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.266 [2024-07-15 22:44:54.984904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:80880 len:8 PRP1 0x0 PRP2 0x0 00:21:26.266 [2024-07-15 22:44:54.984922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.266 [2024-07-15 22:44:54.984936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.266 [2024-07-15 22:44:54.984948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.266 [2024-07-15 22:44:54.984959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80888 len:8 PRP1 0x0 PRP2 0x0 00:21:26.266 [2024-07-15 22:44:54.984973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.266 [2024-07-15 22:44:54.984986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.266 [2024-07-15 22:44:54.984997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.266 [2024-07-15 22:44:54.985008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80896 len:8 PRP1 0x0 PRP2 0x0 00:21:26.266 [2024-07-15 22:44:54.985021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.266 [2024-07-15 22:44:54.985034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.266 [2024-07-15 22:44:54.985046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.266 [2024-07-15 22:44:54.985057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80904 len:8 PRP1 0x0 PRP2 0x0 00:21:26.266 [2024-07-15 22:44:54.985070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.266 [2024-07-15 22:44:54.985084] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.266 [2024-07-15 22:44:54.985094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.266 [2024-07-15 22:44:54.985106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80912 len:8 PRP1 0x0 PRP2 0x0 00:21:26.266 [2024-07-15 22:44:54.985119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.266 [2024-07-15 22:44:54.985133] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.266 [2024-07-15 22:44:54.985144] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.266 [2024-07-15 22:44:54.985155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80920 len:8 PRP1 0x0 PRP2 0x0 00:21:26.266 [2024-07-15 22:44:54.985194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.266 [2024-07-15 22:44:54.985217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.266 [2024-07-15 22:44:54.985228] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.266 [2024-07-15 22:44:54.985239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80928 len:8 PRP1 0x0 PRP2 0x0 00:21:26.266 [2024-07-15 22:44:54.985252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.266 [2024-07-15 22:44:54.985264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.266 [2024-07-15 22:44:54.985275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.266 [2024-07-15 
22:44:54.985286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80936 len:8 PRP1 0x0 PRP2 0x0 00:21:26.266 [2024-07-15 22:44:54.985299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.266 [2024-07-15 22:44:54.985311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.266 [2024-07-15 22:44:54.985322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.266 [2024-07-15 22:44:54.985333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80944 len:8 PRP1 0x0 PRP2 0x0 00:21:26.266 [2024-07-15 22:44:54.985346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.266 [2024-07-15 22:44:54.985358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.266 [2024-07-15 22:44:54.985369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.266 [2024-07-15 22:44:54.985380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80952 len:8 PRP1 0x0 PRP2 0x0 00:21:26.266 [2024-07-15 22:44:54.985393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.266 [2024-07-15 22:44:54.985407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.266 [2024-07-15 22:44:54.985417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.266 [2024-07-15 22:44:54.985428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80960 len:8 PRP1 0x0 PRP2 0x0 00:21:26.266 [2024-07-15 22:44:54.985441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.266 [2024-07-15 22:44:54.985453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.266 [2024-07-15 22:44:54.985464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.266 [2024-07-15 22:44:54.985475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80968 len:8 PRP1 0x0 PRP2 0x0 00:21:26.266 [2024-07-15 22:44:54.985488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.266 [2024-07-15 22:44:54.985500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.266 [2024-07-15 22:44:54.985510] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.266 [2024-07-15 22:44:54.985522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80976 len:8 PRP1 0x0 PRP2 0x0 00:21:26.266 [2024-07-15 22:44:54.985534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.266 [2024-07-15 22:44:54.985547] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.266 [2024-07-15 22:44:54.985557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.266 [2024-07-15 22:44:54.985568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80984 len:8 PRP1 0x0 PRP2 0x0 00:21:26.266 [2024-07-15 22:44:54.985584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.266 [2024-07-15 22:44:54.985598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.266 [2024-07-15 22:44:54.985609] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.266 [2024-07-15 22:44:54.985620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80992 len:8 PRP1 0x0 PRP2 0x0 00:21:26.266 [2024-07-15 22:44:54.985632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.266 [2024-07-15 22:44:54.985645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.266 [2024-07-15 22:44:54.985656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.266 [2024-07-15 22:44:54.985667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81000 len:8 PRP1 0x0 PRP2 0x0 00:21:26.266 [2024-07-15 22:44:54.985680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.266 [2024-07-15 22:44:54.985692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.266 [2024-07-15 22:44:54.985702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.266 [2024-07-15 22:44:54.985714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81008 len:8 PRP1 0x0 PRP2 0x0 00:21:26.266 [2024-07-15 22:44:54.985726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.266 [2024-07-15 22:44:54.985739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.266 [2024-07-15 22:44:54.985760] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.266 [2024-07-15 22:44:54.985770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81016 len:8 PRP1 0x0 PRP2 0x0 00:21:26.266 
[2024-07-15 22:44:54.985783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.266 [2024-07-15 22:44:54.985796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.266 [2024-07-15 22:44:54.985806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.266 [2024-07-15 22:44:54.985817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81024 len:8 PRP1 0x0 PRP2 0x0 00:21:26.266 [2024-07-15 22:44:54.985829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.266 [2024-07-15 22:44:54.985842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.266 [2024-07-15 22:44:54.985852] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.266 [2024-07-15 22:44:54.985884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81032 len:8 PRP1 0x0 PRP2 0x0 00:21:26.266 [2024-07-15 22:44:54.985899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.266 [2024-07-15 22:44:54.985913] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.266 [2024-07-15 22:44:54.985934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.266 [2024-07-15 22:44:54.985946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81040 len:8 PRP1 0x0 PRP2 0x0 00:21:26.266 [2024-07-15 22:44:54.985958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.266 [2024-07-15 22:44:54.985971] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:21:26.266 [2024-07-15 22:44:54.985983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.266 [2024-07-15 22:44:54.985998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81048 len:8 PRP1 0x0 PRP2 0x0 00:21:26.266 [2024-07-15 22:44:54.986013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.266 [2024-07-15 22:44:54.986027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.266 [2024-07-15 22:44:54.986038] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.266 [2024-07-15 22:44:54.986050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81056 len:8 PRP1 0x0 PRP2 0x0 00:21:26.266 [2024-07-15 22:44:54.986063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.266 [2024-07-15 22:44:54.986076] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.266 [2024-07-15 22:44:54.986087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.266 [2024-07-15 22:44:54.986099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81064 len:8 PRP1 0x0 PRP2 0x0 00:21:26.267 [2024-07-15 22:44:54.986112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.267 [2024-07-15 22:44:54.986126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.267 [2024-07-15 22:44:54.986137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.267 [2024-07-15 22:44:54.986148] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81072 len:8 PRP1 0x0 PRP2 0x0 00:21:26.267 [2024-07-15 22:44:54.986162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.267 [2024-07-15 22:44:54.986190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.267 [2024-07-15 22:44:54.986200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.267 [2024-07-15 22:44:54.986211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81080 len:8 PRP1 0x0 PRP2 0x0 00:21:26.267 [2024-07-15 22:44:54.986224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.267 [2024-07-15 22:44:54.986236] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.267 [2024-07-15 22:44:54.986249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.267 [2024-07-15 22:44:54.986261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81088 len:8 PRP1 0x0 PRP2 0x0 00:21:26.267 [2024-07-15 22:44:54.986273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.267 [2024-07-15 22:44:54.986286] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.267 [2024-07-15 22:44:54.986297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.267 [2024-07-15 22:44:54.986308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81096 len:8 PRP1 0x0 PRP2 0x0 00:21:26.267 [2024-07-15 22:44:54.986320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:26.267 [2024-07-15 22:44:54.986333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.267 [2024-07-15 22:44:54.986344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.267 [2024-07-15 22:44:54.986355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81104 len:8 PRP1 0x0 PRP2 0x0 00:21:26.267 [2024-07-15 22:44:54.986369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.267 [2024-07-15 22:44:54.986383] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.267 [2024-07-15 22:44:54.986397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.267 [2024-07-15 22:44:54.986411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81112 len:8 PRP1 0x0 PRP2 0x0 00:21:26.267 [2024-07-15 22:44:54.986424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.267 [2024-07-15 22:44:54.986438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.267 [2024-07-15 22:44:54.986450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.267 [2024-07-15 22:44:54.986461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81120 len:8 PRP1 0x0 PRP2 0x0 00:21:26.267 [2024-07-15 22:44:54.986474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.267 [2024-07-15 22:44:54.986487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.267 [2024-07-15 22:44:54.986498] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:21:26.267 [2024-07-15 22:44:54.986508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81128 len:8 PRP1 0x0 PRP2 0x0 00:21:26.267 [2024-07-15 22:44:54.986521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.267 [2024-07-15 22:44:54.986535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.267 [2024-07-15 22:44:54.986545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.267 [2024-07-15 22:44:54.986556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81136 len:8 PRP1 0x0 PRP2 0x0 00:21:26.267 [2024-07-15 22:44:54.986569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.267 [2024-07-15 22:44:54.986582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.267 [2024-07-15 22:44:54.986592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.267 [2024-07-15 22:44:54.986603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81144 len:8 PRP1 0x0 PRP2 0x0 00:21:26.267 [2024-07-15 22:44:54.986616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.267 [2024-07-15 22:44:54.986628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.267 [2024-07-15 22:44:54.986639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.267 [2024-07-15 22:44:54.986650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81152 len:8 PRP1 0x0 PRP2 0x0 00:21:26.267 [2024-07-15 22:44:54.986662] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.267 [2024-07-15 22:44:54.986675] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.267 [2024-07-15 22:44:54.986686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.267 [2024-07-15 22:44:54.986697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81160 len:8 PRP1 0x0 PRP2 0x0 00:21:26.267 [2024-07-15 22:44:54.986709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.267 [2024-07-15 22:44:54.986722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.267 [2024-07-15 22:44:54.986732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.267 [2024-07-15 22:44:54.986744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81168 len:8 PRP1 0x0 PRP2 0x0 00:21:26.267 [2024-07-15 22:44:54.986756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.267 [2024-07-15 22:44:54.986772] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.267 [2024-07-15 22:44:54.986784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.267 [2024-07-15 22:44:54.986794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81176 len:8 PRP1 0x0 PRP2 0x0 00:21:26.267 [2024-07-15 22:44:54.986808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.267 [2024-07-15 22:44:54.986822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.267 
[2024-07-15 22:44:54.986834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.267 [2024-07-15 22:44:54.986844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81184 len:8 PRP1 0x0 PRP2 0x0 00:21:26.267 [2024-07-15 22:44:54.986872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.267 [2024-07-15 22:44:54.986895] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.267 [2024-07-15 22:44:54.986917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.267 [2024-07-15 22:44:54.986928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81192 len:8 PRP1 0x0 PRP2 0x0 00:21:26.267 [2024-07-15 22:44:54.986943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.267 [2024-07-15 22:44:54.986957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.267 [2024-07-15 22:44:54.986970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.267 [2024-07-15 22:44:54.986982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81200 len:8 PRP1 0x0 PRP2 0x0 00:21:26.267 [2024-07-15 22:44:54.986996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.267 [2024-07-15 22:44:54.987010] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.267 [2024-07-15 22:44:54.987021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.267 [2024-07-15 22:44:54.987033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:81208 len:8 PRP1 0x0 PRP2 0x0 00:21:26.267 [2024-07-15 22:44:54.987048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.267 [2024-07-15 22:44:54.987061] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.267 [2024-07-15 22:44:54.987073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.267 [2024-07-15 22:44:54.987085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81216 len:8 PRP1 0x0 PRP2 0x0 00:21:26.267 [2024-07-15 22:44:54.987099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.267 [2024-07-15 22:44:54.987113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.267 [2024-07-15 22:44:54.987124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.267 [2024-07-15 22:44:54.987136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81224 len:8 PRP1 0x0 PRP2 0x0 00:21:26.267 [2024-07-15 22:44:54.987150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.267 [2024-07-15 22:44:54.987164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.267 [2024-07-15 22:44:54.987175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.267 [2024-07-15 22:44:54.987202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81232 len:8 PRP1 0x0 PRP2 0x0 00:21:26.267 [2024-07-15 22:44:54.987218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.267 [2024-07-15 22:44:54.987232] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.267 [2024-07-15 22:44:54.987243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.267 [2024-07-15 22:44:54.987254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81240 len:8 PRP1 0x0 PRP2 0x0 00:21:26.267 [2024-07-15 22:44:54.987266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.267 [2024-07-15 22:44:54.987278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.267 [2024-07-15 22:44:54.987289] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.267 [2024-07-15 22:44:54.987300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81248 len:8 PRP1 0x0 PRP2 0x0 00:21:26.267 [2024-07-15 22:44:54.987312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.267 [2024-07-15 22:44:54.987325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.267 [2024-07-15 22:44:54.987335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.267 [2024-07-15 22:44:54.987346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81256 len:8 PRP1 0x0 PRP2 0x0 00:21:26.267 [2024-07-15 22:44:54.987358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.267 [2024-07-15 22:44:54.987371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.267 [2024-07-15 22:44:54.987381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.267 [2024-07-15 
22:44:54.987392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81264 len:8 PRP1 0x0 PRP2 0x0 00:21:26.268 [2024-07-15 22:44:54.987404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.268 [2024-07-15 22:44:54.987423] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.268 [2024-07-15 22:44:54.987434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.268 [2024-07-15 22:44:54.987445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81272 len:8 PRP1 0x0 PRP2 0x0 00:21:26.268 [2024-07-15 22:44:54.987457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.268 [2024-07-15 22:44:54.987470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.268 [2024-07-15 22:44:54.987480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.268 [2024-07-15 22:44:54.987491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81280 len:8 PRP1 0x0 PRP2 0x0 00:21:26.268 [2024-07-15 22:44:54.987503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.268 [2024-07-15 22:44:54.987516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.268 [2024-07-15 22:44:54.987526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.268 [2024-07-15 22:44:54.987537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81288 len:8 PRP1 0x0 PRP2 0x0 00:21:26.268 [2024-07-15 22:44:54.987549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.268 [2024-07-15 22:44:54.987562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.268 [2024-07-15 22:44:54.987572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.268 [2024-07-15 22:44:54.987586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81296 len:8 PRP1 0x0 PRP2 0x0 00:21:26.268 [2024-07-15 22:44:54.987599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.268 [2024-07-15 22:44:54.987611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.268 [2024-07-15 22:44:54.987622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.268 [2024-07-15 22:44:54.987633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81304 len:8 PRP1 0x0 PRP2 0x0 00:21:26.268 [2024-07-15 22:44:54.987645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.268 [2024-07-15 22:44:54.987658] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.268 [2024-07-15 22:44:54.987669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.268 [2024-07-15 22:44:54.987681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81312 len:8 PRP1 0x0 PRP2 0x0 00:21:26.268 [2024-07-15 22:44:54.987693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.268 [2024-07-15 22:44:54.987706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.268 [2024-07-15 22:44:54.987717] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.268 [2024-07-15 22:44:54.987727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81320 len:8 PRP1 0x0 PRP2 0x0 00:21:26.268 [2024-07-15 22:44:54.987740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.268 [2024-07-15 22:44:54.987753] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.268 [2024-07-15 22:44:54.987763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.268 [2024-07-15 22:44:54.987774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81328 len:8 PRP1 0x0 PRP2 0x0 00:21:26.268 [2024-07-15 22:44:54.987787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.268 [2024-07-15 22:44:54.987800] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.268 [2024-07-15 22:44:54.987810] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.268 [2024-07-15 22:44:54.987821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81336 len:8 PRP1 0x0 PRP2 0x0 00:21:26.268 [2024-07-15 22:44:54.987834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.268 [2024-07-15 22:44:54.987846] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.268 [2024-07-15 22:44:54.987871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.268 [2024-07-15 22:44:54.987894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81344 len:8 PRP1 0x0 PRP2 0x0 00:21:26.268 
[2024-07-15 22:44:54.987908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.268 [2024-07-15 22:44:54.987947] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.268 [2024-07-15 22:44:54.987959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.268 [2024-07-15 22:44:54.987971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81352 len:8 PRP1 0x0 PRP2 0x0 00:21:26.268 [2024-07-15 22:44:54.987984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.268 [2024-07-15 22:44:54.988001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.268 [2024-07-15 22:44:54.988014] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.268 [2024-07-15 22:44:54.988025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81360 len:8 PRP1 0x0 PRP2 0x0 00:21:26.268 [2024-07-15 22:44:54.988039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.268 [2024-07-15 22:44:54.988052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.268 [2024-07-15 22:44:54.988063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.268 [2024-07-15 22:44:54.988075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81368 len:8 PRP1 0x0 PRP2 0x0 00:21:26.268 [2024-07-15 22:44:54.988088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.268 [2024-07-15 22:44:54.988108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:21:26.268 [2024-07-15 22:44:54.988120] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.268 [2024-07-15 22:44:54.988132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81376 len:8 PRP1 0x0 PRP2 0x0 00:21:26.268 [2024-07-15 22:44:54.988145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.268 [2024-07-15 22:44:54.988173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.268 [2024-07-15 22:44:54.988185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.268 [2024-07-15 22:44:54.988196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80800 len:8 PRP1 0x0 PRP2 0x0 00:21:26.268 [2024-07-15 22:44:54.988209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.268 [2024-07-15 22:44:54.988226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.268 [2024-07-15 22:44:54.998869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.268 [2024-07-15 22:44:54.998931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80808 len:8 PRP1 0x0 PRP2 0x0 00:21:26.268 [2024-07-15 22:44:54.998948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.268 [2024-07-15 22:44:54.998964] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.268 [2024-07-15 22:44:54.998976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.268 [2024-07-15 22:44:54.998988] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80816 len:8 PRP1 0x0 PRP2 0x0 00:21:26.268 [2024-07-15 22:44:54.999001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.268 [2024-07-15 22:44:54.999014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.268 [2024-07-15 22:44:54.999026] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.268 [2024-07-15 22:44:54.999037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81384 len:8 PRP1 0x0 PRP2 0x0 00:21:26.268 [2024-07-15 22:44:54.999050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.268 [2024-07-15 22:44:54.999064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.268 [2024-07-15 22:44:54.999075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.268 [2024-07-15 22:44:54.999087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81392 len:8 PRP1 0x0 PRP2 0x0 00:21:26.268 [2024-07-15 22:44:54.999106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.268 [2024-07-15 22:44:54.999120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.268 [2024-07-15 22:44:54.999131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.268 [2024-07-15 22:44:54.999143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81400 len:8 PRP1 0x0 PRP2 0x0 00:21:26.268 [2024-07-15 22:44:54.999156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:26.268 [2024-07-15 22:44:54.999177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.268 [2024-07-15 22:44:54.999188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.268 [2024-07-15 22:44:54.999214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81408 len:8 PRP1 0x0 PRP2 0x0 00:21:26.268 [2024-07-15 22:44:54.999228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.268 [2024-07-15 22:44:54.999243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.268 [2024-07-15 22:44:54.999255] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.268 [2024-07-15 22:44:54.999282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81416 len:8 PRP1 0x0 PRP2 0x0 00:21:26.268 [2024-07-15 22:44:54.999294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.268 [2024-07-15 22:44:54.999307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.268 [2024-07-15 22:44:54.999317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.268 [2024-07-15 22:44:54.999328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81424 len:8 PRP1 0x0 PRP2 0x0 00:21:26.268 [2024-07-15 22:44:54.999341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.268 [2024-07-15 22:44:54.999353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.268 [2024-07-15 22:44:54.999363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:21:26.268 [2024-07-15 22:44:54.999374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81432 len:8 PRP1 0x0 PRP2 0x0 00:21:26.268 [2024-07-15 22:44:54.999387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.268 [2024-07-15 22:44:54.999399] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.268 [2024-07-15 22:44:54.999410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.268 [2024-07-15 22:44:54.999421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81440 len:8 PRP1 0x0 PRP2 0x0 00:21:26.268 [2024-07-15 22:44:54.999433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.269 [2024-07-15 22:44:54.999446] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.269 [2024-07-15 22:44:54.999456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.269 [2024-07-15 22:44:54.999467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81448 len:8 PRP1 0x0 PRP2 0x0 00:21:26.269 [2024-07-15 22:44:54.999479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.269 [2024-07-15 22:44:54.999492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.269 [2024-07-15 22:44:54.999502] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.269 [2024-07-15 22:44:54.999516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81456 len:8 PRP1 0x0 PRP2 0x0 00:21:26.269 [2024-07-15 22:44:54.999529] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.269 [2024-07-15 22:44:54.999594] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1402390 was disconnected and freed. reset controller. 00:21:26.269 [2024-07-15 22:44:54.999611] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:26.269 [2024-07-15 22:44:54.999626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:26.269 [2024-07-15 22:44:54.999686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13dc0f0 (9): Bad file descriptor 00:21:26.269 [2024-07-15 22:44:55.002981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:26.269 [2024-07-15 22:44:55.152058] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:26.269 [2024-07-15 22:44:58.552337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:102504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.269 [2024-07-15 22:44:58.552384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.269 [2024-07-15 22:44:58.552414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:102512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.269 [2024-07-15 22:44:58.552431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.269 [2024-07-15 22:44:58.552448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:102520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.269 [2024-07-15 22:44:58.552462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.269 [2024-07-15 22:44:58.552478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:102528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.269 [2024-07-15 22:44:58.552492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.269 [2024-07-15 22:44:58.552508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:102536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.269 [2024-07-15 22:44:58.552523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.269 [2024-07-15 22:44:58.552555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:102544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.269 [2024-07-15 22:44:58.552569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.269 [2024-07-15 22:44:58.552584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:102552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.269 [2024-07-15 22:44:58.552599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.269 [2024-07-15 22:44:58.552613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:102560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.269 [2024-07-15 22:44:58.552627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.269 [2024-07-15 22:44:58.552642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:102568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:26.269 [2024-07-15 22:44:58.552655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.269 [2024-07-15 22:44:58.552687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:102576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.269 [2024-07-15 22:44:58.552706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.269 [2024-07-15 22:44:58.552722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:102584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.269 [2024-07-15 22:44:58.552735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.269 [2024-07-15 22:44:58.552749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:102592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.269 [2024-07-15 22:44:58.552763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.269 [2024-07-15 22:44:58.552777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:102600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.269 [2024-07-15 22:44:58.552791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.269 [2024-07-15 22:44:58.552805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:102608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.269 [2024-07-15 22:44:58.552818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.269 [2024-07-15 22:44:58.552832] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:102616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.269 [2024-07-15 22:44:58.552846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.269 [2024-07-15 22:44:58.552859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:102624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.269 [2024-07-15 22:44:58.552873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.269 [2024-07-15 22:44:58.552911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:102632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.269 [2024-07-15 22:44:58.552936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.269 [2024-07-15 22:44:58.552952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.269 [2024-07-15 22:44:58.552965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.269 [2024-07-15 22:44:58.552980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:102648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.269 [2024-07-15 22:44:58.552994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.269 [2024-07-15 22:44:58.553009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:102656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.269 [2024-07-15 22:44:58.553022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.269 [2024-07-15 22:44:58.553037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:102664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.269 [2024-07-15 22:44:58.553050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.269 [2024-07-15 22:44:58.553065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:102672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.269 [2024-07-15 22:44:58.553079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.269 [2024-07-15 22:44:58.553097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:102680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.269 [2024-07-15 22:44:58.553112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.269 [2024-07-15 22:44:58.553127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:102688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.269 [2024-07-15 22:44:58.553141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.269 [2024-07-15 22:44:58.553158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:102696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.269 [2024-07-15 22:44:58.553173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.269 [2024-07-15 22:44:58.553213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:102704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:26.269 [2024-07-15 22:44:58.553227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.269 [2024-07-15 22:44:58.553242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:102712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.269 [2024-07-15 22:44:58.553255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.269 [2024-07-15 22:44:58.553270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:102720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.269 [2024-07-15 22:44:58.553283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.269 [2024-07-15 22:44:58.553298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:102728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.269 [2024-07-15 22:44:58.553311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.269 [2024-07-15 22:44:58.553326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:102736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.269 [2024-07-15 22:44:58.553339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.269 [2024-07-15 22:44:58.553354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:102744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.269 [2024-07-15 22:44:58.553368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.553382] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:102752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.270 [2024-07-15 22:44:58.553396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.553411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:102760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.270 [2024-07-15 22:44:58.553425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.553440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:102768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.270 [2024-07-15 22:44:58.553454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.553468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:102776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.270 [2024-07-15 22:44:58.553490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.553506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:102784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.270 [2024-07-15 22:44:58.553520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.553534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:102792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.270 [2024-07-15 22:44:58.553548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.553563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:102800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.270 [2024-07-15 22:44:58.553581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.553596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:102808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.270 [2024-07-15 22:44:58.553610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.553624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:102816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.270 [2024-07-15 22:44:58.553646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.553661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:102824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.270 [2024-07-15 22:44:58.553674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.553689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:102832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.270 [2024-07-15 22:44:58.553702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.553716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:102840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:26.270 [2024-07-15 22:44:58.553730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.553744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:102848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.270 [2024-07-15 22:44:58.553757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.553772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:102856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.270 [2024-07-15 22:44:58.553786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.553800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:102864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.270 [2024-07-15 22:44:58.553813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.553828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:102872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.270 [2024-07-15 22:44:58.553842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.553886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:102880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.270 [2024-07-15 22:44:58.553904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.553920] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:102888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.270 [2024-07-15 22:44:58.553951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.553967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.270 [2024-07-15 22:44:58.553981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.553997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:102904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.270 [2024-07-15 22:44:58.554012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.554027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.270 [2024-07-15 22:44:58.554041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.554057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:102920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.270 [2024-07-15 22:44:58.554071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.554087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:102928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.270 [2024-07-15 22:44:58.554101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.554117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:102936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.270 [2024-07-15 22:44:58.554131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.554147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.270 [2024-07-15 22:44:58.554173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.554203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:102952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.270 [2024-07-15 22:44:58.554218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.554233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:102960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.270 [2024-07-15 22:44:58.554247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.554261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:102968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.270 [2024-07-15 22:44:58.554275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.554291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:102976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:26.270 [2024-07-15 22:44:58.554309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.554324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:102984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.270 [2024-07-15 22:44:58.554345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.554360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:102992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.270 [2024-07-15 22:44:58.554374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.554389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:103000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.270 [2024-07-15 22:44:58.554403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.554418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:103040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.270 [2024-07-15 22:44:58.554432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.554447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:103048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.270 [2024-07-15 22:44:58.554461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.554477] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:103056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.270 [2024-07-15 22:44:58.554490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.554505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:103064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.270 [2024-07-15 22:44:58.554519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.554549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:103072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.270 [2024-07-15 22:44:58.554564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.554578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:103080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.270 [2024-07-15 22:44:58.554592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.554607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:103088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.270 [2024-07-15 22:44:58.554620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.554634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:103096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.270 [2024-07-15 22:44:58.554648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.554662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:103104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.270 [2024-07-15 22:44:58.554675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.554693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:103112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.270 [2024-07-15 22:44:58.554706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.270 [2024-07-15 22:44:58.554721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:103120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.270 [2024-07-15 22:44:58.554734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.554749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:103128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 22:44:58.554762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.554776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:103136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 22:44:58.554790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.554804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 
[2024-07-15 22:44:58.554823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.554838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:103152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 22:44:58.554852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.554866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:103160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 22:44:58.554901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.554920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:103168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 22:44:58.554934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.554949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:103176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 22:44:58.554969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.554985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:103184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 22:44:58.554998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.555013] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:103192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 22:44:58.555027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.555042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:103200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 22:44:58.555055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.555070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:103208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 22:44:58.555087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.555103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:103216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 22:44:58.555118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.555133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:103224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 22:44:58.555148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.555164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:103232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 22:44:58.555179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.555195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:103240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 22:44:58.555219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.555236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:103248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 22:44:58.555250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.555264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:103256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 22:44:58.555278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.555293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:103264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 22:44:58.555307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.555321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:103272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 22:44:58.555336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.555351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:103280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 
22:44:58.555364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.555379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:103288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 22:44:58.555392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.555407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:103296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 22:44:58.555420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.555436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:103304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 22:44:58.555451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.555466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:103312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 22:44:58.555483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.555499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:103320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 22:44:58.555513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.555529] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:103328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 22:44:58.555543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.555559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:103336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 22:44:58.555573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.555588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:103344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 22:44:58.555603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.555617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:103352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 22:44:58.555632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.555647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:103360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 22:44:58.555661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.555677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:103368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 22:44:58.555692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.555707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:103376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 22:44:58.555721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.555735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:103384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 22:44:58.555750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.555765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:103392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 22:44:58.555779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.555794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:103400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 22:44:58.555808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.555823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:103408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 22:44:58.555838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.271 [2024-07-15 22:44:58.555856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:103416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.271 [2024-07-15 
22:44:58.555871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:26.271 [2024-07-15 22:44:58.555909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:103424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:26.271 [2024-07-15 22:44:58.555925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
...
00:21:26.272 [2024-07-15 22:44:58.556287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:103520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:26.272 [2024-07-15 22:44:58.556300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:26.272 [2024-07-15 22:44:58.556315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:103008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:26.272 [2024-07-15 22:44:58.556328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
...
00:21:26.272 [2024-07-15 22:44:58.556372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:103024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:26.272 [2024-07-15 22:44:58.556386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:26.272 [2024-07-15 22:44:58.556418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:26.272 [2024-07-15 22:44:58.556435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:26.272 [2024-07-15 22:44:58.556447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103032 len:8 PRP1 0x0 PRP2 0x0
00:21:26.272 [2024-07-15 22:44:58.556460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:26.272 [2024-07-15 22:44:58.556526] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15a6d80 was disconnected and freed. reset controller.
00:21:26.272 [2024-07-15 22:44:58.556545] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:21:26.272 [2024-07-15 22:44:58.556593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:26.272 [2024-07-15 22:44:58.556612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:26.272 [2024-07-15 22:44:58.556628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:26.272 [2024-07-15 22:44:58.556642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:26.272 [2024-07-15 22:44:58.556656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:26.272 [2024-07-15 22:44:58.556670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:26.272 [2024-07-15 22:44:58.556684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:26.272 [2024-07-15 22:44:58.556697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:26.272 [2024-07-15 22:44:58.556711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:26.272 [2024-07-15 22:44:58.559997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:26.272 [2024-07-15 22:44:58.560038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13dc0f0 (9): Bad file descriptor
00:21:26.272 [2024-07-15 22:44:58.597926] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:21:26.272 [2024-07-15 22:45:03.114661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:40088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:26.272 [2024-07-15 22:45:03.114714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
...
00:21:26.273 [2024-07-15 22:45:03.115716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:40328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:26.273 [2024-07-15 22:45:03.115730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:26.273 [2024-07-15 22:45:03.115745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:40352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:26.273 [2024-07-15 22:45:03.115759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
...
00:21:26.274 [2024-07-15 22:45:03.117228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:40728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:26.274 [2024-07-15 22:45:03.117242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:26.274 [2024-07-15 22:45:03.117276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:26.274 [2024-07-15 22:45:03.117293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40736 len:8 PRP1 0x0 PRP2 0x0
00:21:26.274 [2024-07-15 22:45:03.117307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:26.274 [2024-07-15 22:45:03.117335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:26.274 [2024-07-15 22:45:03.117347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:26.274 [2024-07-15 22:45:03.117358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40744 len:8 PRP1 0x0 PRP2 0x0
00:21:26.274 [2024-07-15 22:45:03.117371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
...
00:21:26.274 [2024-07-15 22:45:03.117737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:26.274 [2024-07-15 22:45:03.117748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:26.274 [2024-07-15 22:45:03.117759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40808 len:8 PRP1 0x0 PRP2 0x0
00:21:26.274 
[2024-07-15 22:45:03.117772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.274 [2024-07-15 22:45:03.117784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.274 [2024-07-15 22:45:03.117795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.274 [2024-07-15 22:45:03.117805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40816 len:8 PRP1 0x0 PRP2 0x0 00:21:26.274 [2024-07-15 22:45:03.117818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.274 [2024-07-15 22:45:03.117831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.274 [2024-07-15 22:45:03.117841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.274 [2024-07-15 22:45:03.117852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40824 len:8 PRP1 0x0 PRP2 0x0 00:21:26.274 [2024-07-15 22:45:03.117864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.274 [2024-07-15 22:45:03.117883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.274 [2024-07-15 22:45:03.117911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.274 [2024-07-15 22:45:03.117923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40832 len:8 PRP1 0x0 PRP2 0x0 00:21:26.274 [2024-07-15 22:45:03.117937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.274 [2024-07-15 22:45:03.117950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:21:26.274 [2024-07-15 22:45:03.117961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.274 [2024-07-15 22:45:03.117973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40840 len:8 PRP1 0x0 PRP2 0x0 00:21:26.274 [2024-07-15 22:45:03.117990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.274 [2024-07-15 22:45:03.118004] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.274 [2024-07-15 22:45:03.118015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.274 [2024-07-15 22:45:03.118026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40848 len:8 PRP1 0x0 PRP2 0x0 00:21:26.274 [2024-07-15 22:45:03.118039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.274 [2024-07-15 22:45:03.118052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.275 [2024-07-15 22:45:03.118063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.275 [2024-07-15 22:45:03.118074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40856 len:8 PRP1 0x0 PRP2 0x0 00:21:26.275 [2024-07-15 22:45:03.118086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.275 [2024-07-15 22:45:03.118099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.275 [2024-07-15 22:45:03.118110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.275 [2024-07-15 22:45:03.118121] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40864 len:8 PRP1 0x0 PRP2 0x0 00:21:26.275 [2024-07-15 22:45:03.118134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.275 [2024-07-15 22:45:03.118147] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.275 [2024-07-15 22:45:03.118169] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.275 [2024-07-15 22:45:03.118180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40872 len:8 PRP1 0x0 PRP2 0x0 00:21:26.275 [2024-07-15 22:45:03.118193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.275 [2024-07-15 22:45:03.118223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.275 [2024-07-15 22:45:03.118233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.275 [2024-07-15 22:45:03.118244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40880 len:8 PRP1 0x0 PRP2 0x0 00:21:26.275 [2024-07-15 22:45:03.118256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.275 [2024-07-15 22:45:03.118269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.275 [2024-07-15 22:45:03.118280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.275 [2024-07-15 22:45:03.118291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40888 len:8 PRP1 0x0 PRP2 0x0 00:21:26.275 [2024-07-15 22:45:03.118303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:26.275 [2024-07-15 22:45:03.118316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.275 [2024-07-15 22:45:03.118327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.275 [2024-07-15 22:45:03.118338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40896 len:8 PRP1 0x0 PRP2 0x0 00:21:26.275 [2024-07-15 22:45:03.118350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.275 [2024-07-15 22:45:03.118363] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.275 [2024-07-15 22:45:03.118374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.275 [2024-07-15 22:45:03.118388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40904 len:8 PRP1 0x0 PRP2 0x0 00:21:26.275 [2024-07-15 22:45:03.118401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.275 [2024-07-15 22:45:03.118414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.275 [2024-07-15 22:45:03.118424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.275 [2024-07-15 22:45:03.118436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40912 len:8 PRP1 0x0 PRP2 0x0 00:21:26.275 [2024-07-15 22:45:03.118453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.275 [2024-07-15 22:45:03.118465] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.275 [2024-07-15 22:45:03.118476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:21:26.275 [2024-07-15 22:45:03.118487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40920 len:8 PRP1 0x0 PRP2 0x0 00:21:26.275 [2024-07-15 22:45:03.118499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.275 [2024-07-15 22:45:03.118511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.275 [2024-07-15 22:45:03.118522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.275 [2024-07-15 22:45:03.118544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40928 len:8 PRP1 0x0 PRP2 0x0 00:21:26.275 [2024-07-15 22:45:03.118556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.275 [2024-07-15 22:45:03.118568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.275 [2024-07-15 22:45:03.118579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.275 [2024-07-15 22:45:03.118590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40936 len:8 PRP1 0x0 PRP2 0x0 00:21:26.275 [2024-07-15 22:45:03.118602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.275 [2024-07-15 22:45:03.118615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.275 [2024-07-15 22:45:03.118625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.275 [2024-07-15 22:45:03.118636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40944 len:8 PRP1 0x0 PRP2 0x0 00:21:26.275 [2024-07-15 22:45:03.118648] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.275 [2024-07-15 22:45:03.118661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.275 [2024-07-15 22:45:03.118671] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.275 [2024-07-15 22:45:03.118682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40952 len:8 PRP1 0x0 PRP2 0x0 00:21:26.275 [2024-07-15 22:45:03.118695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.275 [2024-07-15 22:45:03.118708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.275 [2024-07-15 22:45:03.118718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.275 [2024-07-15 22:45:03.118737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40960 len:8 PRP1 0x0 PRP2 0x0 00:21:26.275 [2024-07-15 22:45:03.118751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.275 [2024-07-15 22:45:03.118767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.275 [2024-07-15 22:45:03.118779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.275 [2024-07-15 22:45:03.118790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40968 len:8 PRP1 0x0 PRP2 0x0 00:21:26.275 [2024-07-15 22:45:03.118803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.275 [2024-07-15 22:45:03.118817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.275 
[2024-07-15 22:45:03.118828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.275 [2024-07-15 22:45:03.118840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40976 len:8 PRP1 0x0 PRP2 0x0 00:21:26.275 [2024-07-15 22:45:03.118853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.275 [2024-07-15 22:45:03.118899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.275 [2024-07-15 22:45:03.118915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.275 [2024-07-15 22:45:03.118927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40984 len:8 PRP1 0x0 PRP2 0x0 00:21:26.275 [2024-07-15 22:45:03.118940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.275 [2024-07-15 22:45:03.118954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.275 [2024-07-15 22:45:03.118965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.275 [2024-07-15 22:45:03.118977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40992 len:8 PRP1 0x0 PRP2 0x0 00:21:26.275 [2024-07-15 22:45:03.118989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.275 [2024-07-15 22:45:03.119003] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.275 [2024-07-15 22:45:03.119014] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.275 [2024-07-15 22:45:03.119025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:41000 len:8 PRP1 0x0 PRP2 0x0 00:21:26.275 [2024-07-15 22:45:03.119038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.275 [2024-07-15 22:45:03.119051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.275 [2024-07-15 22:45:03.119062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.275 [2024-07-15 22:45:03.119074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41008 len:8 PRP1 0x0 PRP2 0x0 00:21:26.275 [2024-07-15 22:45:03.119087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.275 [2024-07-15 22:45:03.119100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.275 [2024-07-15 22:45:03.119112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.275 [2024-07-15 22:45:03.119124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41016 len:8 PRP1 0x0 PRP2 0x0 00:21:26.275 [2024-07-15 22:45:03.119137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.275 [2024-07-15 22:45:03.119150] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.275 [2024-07-15 22:45:03.119162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.275 [2024-07-15 22:45:03.119178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41024 len:8 PRP1 0x0 PRP2 0x0 00:21:26.275 [2024-07-15 22:45:03.119220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.275 [2024-07-15 22:45:03.119235] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.275 [2024-07-15 22:45:03.119246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.275 [2024-07-15 22:45:03.119258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41032 len:8 PRP1 0x0 PRP2 0x0 00:21:26.275 [2024-07-15 22:45:03.119270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.275 [2024-07-15 22:45:03.119283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.275 [2024-07-15 22:45:03.119293] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.275 [2024-07-15 22:45:03.119304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41040 len:8 PRP1 0x0 PRP2 0x0 00:21:26.275 [2024-07-15 22:45:03.119324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.275 [2024-07-15 22:45:03.119337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.275 [2024-07-15 22:45:03.119348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.275 [2024-07-15 22:45:03.119358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41048 len:8 PRP1 0x0 PRP2 0x0 00:21:26.275 [2024-07-15 22:45:03.119370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.275 [2024-07-15 22:45:03.119383] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.276 [2024-07-15 22:45:03.119394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.276 [2024-07-15 
22:45:03.119404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41056 len:8 PRP1 0x0 PRP2 0x0 00:21:26.276 [2024-07-15 22:45:03.119416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.276 [2024-07-15 22:45:03.119429] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.276 [2024-07-15 22:45:03.119439] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.276 [2024-07-15 22:45:03.119450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41064 len:8 PRP1 0x0 PRP2 0x0 00:21:26.276 [2024-07-15 22:45:03.119462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.276 [2024-07-15 22:45:03.119474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.276 [2024-07-15 22:45:03.119485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.276 [2024-07-15 22:45:03.119496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41072 len:8 PRP1 0x0 PRP2 0x0 00:21:26.276 [2024-07-15 22:45:03.119508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.276 [2024-07-15 22:45:03.119520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.276 [2024-07-15 22:45:03.119531] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.276 [2024-07-15 22:45:03.119542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41080 len:8 PRP1 0x0 PRP2 0x0 00:21:26.276 [2024-07-15 22:45:03.119554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.276 [2024-07-15 22:45:03.119567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.276 [2024-07-15 22:45:03.119577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.276 [2024-07-15 22:45:03.119592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41088 len:8 PRP1 0x0 PRP2 0x0 00:21:26.276 [2024-07-15 22:45:03.119605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.276 [2024-07-15 22:45:03.119618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.276 [2024-07-15 22:45:03.119629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.276 [2024-07-15 22:45:03.119640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41096 len:8 PRP1 0x0 PRP2 0x0 00:21:26.276 [2024-07-15 22:45:03.119653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.276 [2024-07-15 22:45:03.119666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.276 [2024-07-15 22:45:03.119677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.276 [2024-07-15 22:45:03.119688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41104 len:8 PRP1 0x0 PRP2 0x0 00:21:26.276 [2024-07-15 22:45:03.119701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.276 [2024-07-15 22:45:03.119715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.276 [2024-07-15 22:45:03.119726] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.276 [2024-07-15 22:45:03.119737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40336 len:8 PRP1 0x0 PRP2 0x0 00:21:26.276 [2024-07-15 22:45:03.119749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.276 [2024-07-15 22:45:03.119762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.276 [2024-07-15 22:45:03.119773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.276 [2024-07-15 22:45:03.119784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40344 len:8 PRP1 0x0 PRP2 0x0 00:21:26.276 [2024-07-15 22:45:03.119797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.276 [2024-07-15 22:45:03.119857] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15a6ca0 was disconnected and freed. reset controller. 
00:21:26.276 [2024-07-15 22:45:03.119907] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:21:26.276 [2024-07-15 22:45:03.119944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:26.276 [2024-07-15 22:45:03.119963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:26.276 [2024-07-15 22:45:03.119979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:26.276 [2024-07-15 22:45:03.119992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:26.276 [2024-07-15 22:45:03.120006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:26.276 [2024-07-15 22:45:03.120020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:26.276 [2024-07-15 22:45:03.120034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:26.276 [2024-07-15 22:45:03.120047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:26.276 [2024-07-15 22:45:03.120061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:26.276 [2024-07-15 22:45:03.120104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13dc0f0 (9): Bad file descriptor
00:21:26.276 [2024-07-15 22:45:03.123379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:26.276 [2024-07-15 22:45:03.195783] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:21:26.276
00:21:26.276 Latency(us)
00:21:26.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:26.276 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:26.276 Verification LBA range: start 0x0 length 0x4000
00:21:26.276 NVMe0n1 : 15.01 8671.36 33.87 668.91 0.00 13676.54 801.00 40583.77
00:21:26.276 ===================================================================================================================
00:21:26.276 Total : 8671.36 33.87 668.91 0.00 13676.54 801.00 40583.77
00:21:26.276 Received shutdown signal, test time was about 15.000000 seconds
00:21:26.276
00:21:26.276 Latency(us)
00:21:26.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:26.276 ===================================================================================================================
00:21:26.276 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
22:45:09 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:21:26.276 22:45:09 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:21:26.276 22:45:09 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:21:26.276 22:45:09 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1324135
00:21:26.276 22:45:09 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:21:26.276 22:45:09
nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1324135 /var/tmp/bdevperf.sock
00:21:26.276 22:45:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@823 -- # '[' -z 1324135 ']'
00:21:26.276 22:45:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:26.276 22:45:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # local max_retries=100
00:21:26.276 22:45:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:26.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:26.276 22:45:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # xtrace_disable
00:21:26.276 22:45:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:21:26.276 22:45:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:21:26.276 22:45:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # return 0
00:21:26.276 22:45:09 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:21:26.276 [2024-07-15 22:45:09.720268] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:21:26.276 22:45:09 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:21:26.534 [2024-07-15 22:45:09.981030] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:21:26.534 22:45:09 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t
tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:27.100 NVMe0n1 00:21:27.100 22:45:10 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:27.664 00:21:27.664 22:45:10 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:27.922 00:21:27.922 22:45:11 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:27.922 22:45:11 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:21:28.180 22:45:11 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:28.437 22:45:11 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:21:31.715 22:45:14 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:31.715 22:45:14 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:21:31.715 22:45:15 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1324807 00:21:31.715 22:45:15 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1324807 00:21:31.715 22:45:15 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:32.647 0 00:21:32.905 22:45:16 nvmf_tcp.nvmf_failover -- host/failover.sh@94 
-- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:32.905 [2024-07-15 22:45:09.212134] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:21:32.905 [2024-07-15 22:45:09.212225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1324135 ] 00:21:32.905 [2024-07-15 22:45:09.273807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.905 [2024-07-15 22:45:09.385103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.905 [2024-07-15 22:45:11.736168] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:32.905 [2024-07-15 22:45:11.736278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:32.905 [2024-07-15 22:45:11.736302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.905 [2024-07-15 22:45:11.736320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:32.905 [2024-07-15 22:45:11.736334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.905 [2024-07-15 22:45:11.736348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:32.905 [2024-07-15 22:45:11.736362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.905 [2024-07-15 22:45:11.736376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:32.905 [2024-07-15 22:45:11.736390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.905 [2024-07-15 22:45:11.736404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:32.905 [2024-07-15 22:45:11.736451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:32.905 [2024-07-15 22:45:11.736485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24690f0 (9): Bad file descriptor 00:21:32.905 [2024-07-15 22:45:11.745901] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:32.905 Running I/O for 1 seconds... 00:21:32.905 00:21:32.905 Latency(us) 00:21:32.905 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.905 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:32.905 Verification LBA range: start 0x0 length 0x4000 00:21:32.905 NVMe0n1 : 1.01 8811.47 34.42 0.00 0.00 14449.73 2852.03 12815.93 00:21:32.905 =================================================================================================================== 00:21:32.905 Total : 8811.47 34.42 0.00 0.00 14449.73 2852.03 12815.93 00:21:32.905 22:45:16 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:32.905 22:45:16 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:21:32.905 22:45:16 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:33.163 22:45:16 nvmf_tcp.nvmf_failover -- 
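The bdevperf summary above reports 8811.47 IOPS and 34.42 MiB/s for 4096-byte I/Os. Those two columns are consistent: with a fixed I/O size, throughput in MiB/s is IOPS × io_size / 2^20. A quick sanity check using the values from the log:

```shell
# Sanity-check of the bdevperf summary line above: for 4 KiB I/Os,
# MiB/s = IOPS * io_size / 2^20. Values are taken directly from the log.
awk 'BEGIN {
    iops = 8811.47; io_size = 4096
    mibps = iops * io_size / (1024 * 1024)
    printf "%.2f MiB/s\n", mibps    # prints 34.42 MiB/s, matching the table
}'
```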
host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:33.163 22:45:16 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:21:33.420 22:45:16 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:33.677 22:45:17 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:21:36.997 22:45:20 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:36.997 22:45:20 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:21:36.997 22:45:20 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1324135 00:21:36.997 22:45:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@942 -- # '[' -z 1324135 ']' 00:21:36.997 22:45:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # kill -0 1324135 00:21:36.997 22:45:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # uname 00:21:36.997 22:45:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:36.997 22:45:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1324135 00:21:36.997 22:45:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:21:36.997 22:45:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:21:36.997 22:45:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1324135' 00:21:36.997 killing process with pid 1324135 00:21:36.997 22:45:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@961 -- # kill 1324135 00:21:36.997 22:45:20 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@966 -- # wait 1324135 00:21:37.255 22:45:20 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:21:37.255 22:45:20 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:37.819 22:45:21 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:37.820 22:45:21 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:37.820 22:45:21 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:21:37.820 22:45:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:37.820 22:45:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:21:37.820 22:45:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:37.820 22:45:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:21:37.820 22:45:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:37.820 22:45:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:37.820 rmmod nvme_tcp 00:21:37.820 rmmod nvme_fabrics 00:21:37.820 rmmod nvme_keyring 00:21:37.820 22:45:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:37.820 22:45:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:21:37.820 22:45:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:21:37.820 22:45:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1321228 ']' 00:21:37.820 22:45:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1321228 00:21:37.820 22:45:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@942 -- # '[' -z 1321228 ']' 00:21:37.820 22:45:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # kill -0 1321228 00:21:37.820 22:45:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # uname 
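The `killprocess` trace above checks the process name via `ps --no-headers -o comm=` before sending SIGKILL and then waits on the pid. A minimal sketch of that guard-then-kill-then-reap pattern, assuming a simplified function body (the real autotest helper also special-cases `sudo`-owned processes, which this sketch omits):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the killprocess pattern traced above: confirm the pid
# is still alive and what it is before SIGKILL, then reap it so no zombie is
# left behind. Simplified relative to the real autotest_common.sh helper.
killprocess() {
    local pid="$1" name
    kill -0 "$pid" 2>/dev/null || return 0            # already gone
    name="$(ps --no-headers -o comm= -p "$pid")"      # identify before killing
    echo "killing process with pid $pid ($name)"
    kill -9 "$pid"
    wait "$pid" 2>/dev/null || true                   # reap; ignore 137 status
}

sleep 60 &           # stand-in for the target daemon
killprocess "$!"
```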
00:21:37.820 22:45:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:37.820 22:45:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1321228 00:21:37.820 22:45:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:21:37.820 22:45:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:21:37.820 22:45:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1321228' 00:21:37.820 killing process with pid 1321228 00:21:37.820 22:45:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@961 -- # kill 1321228 00:21:37.820 22:45:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # wait 1321228 00:21:38.078 22:45:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:38.078 22:45:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:38.078 22:45:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:38.078 22:45:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:38.078 22:45:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:38.078 22:45:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.078 22:45:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:38.078 22:45:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.980 22:45:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:39.980 00:21:39.980 real 0m35.744s 00:21:39.980 user 2m5.557s 00:21:39.980 sys 0m5.955s 00:21:39.980 22:45:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1118 -- # xtrace_disable 00:21:39.980 22:45:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:39.980 
************************************ 00:21:39.980 END TEST nvmf_failover 00:21:39.980 ************************************ 00:21:39.980 22:45:23 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:21:39.980 22:45:23 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:39.980 22:45:23 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:21:39.980 22:45:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:21:39.980 22:45:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:39.980 ************************************ 00:21:39.980 START TEST nvmf_host_discovery 00:21:39.980 ************************************ 00:21:39.980 22:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:40.238 * Looking for test storage... 00:21:40.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:40.238 22:45:23 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:21:40.238 22:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # 
pci_drivers=() 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:42.137 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:42.137 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:42.137 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ 
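The device-discovery trace above walks each PCI device and tests its device ID against known parts: Mellanox RDMA NICs (`0x1017`, `0x1019`) get special handling, while the Intel E810 parts found here (`0x159b`) match neither pattern and are kept as plain TCP-capable NICs. A condensed sketch of that classification logic, with an assumed function name and a reduced ID table:

```shell
#!/usr/bin/env bash
# Hypothetical condensation of the device-ID matching in nvmf/common.sh above.
# The ID lists are a small subset of the real tables and purely illustrative.
classify_nic() {
    case "$1" in
        0x1017|0x1019) echo "mellanox-rdma" ;;   # ConnectX parts, RDMA-capable
        0x1592|0x159b) echo "intel-e810"    ;;   # E810 family, used for TCP here
        *)             echo "unknown"       ;;
    esac
}

classify_nic 0x159b   # the ID found for both 0000:0a:00.0 and 0000:0a:00.1
```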
tcp == tcp ]] 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:42.137 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:42.137 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:42.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:42.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:21:42.138 00:21:42.138 --- 10.0.0.2 ping statistics --- 00:21:42.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.138 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:42.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:42.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:21:42.138 00:21:42.138 --- 10.0.0.1 ping statistics --- 00:21:42.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.138 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1327412 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1327412 00:21:42.138 
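The two ping runs above verify connectivity in both directions across the netns boundary (host to 10.0.0.2 inside `cvl_0_0_ns_spdk`, and back out to 10.0.0.1). If a script needed the average RTT from that summary rather than just the exit status, it could be pulled from the `rtt min/avg/max/mdev` line; a small sketch, using a sample line copied from the log:

```shell
#!/usr/bin/env bash
# Sketch: extract the average RTT from ping's summary line, as printed in the
# connectivity checks above. Splitting on '/' or ' ' puts avg in field 8.
ping_summary='rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms'
avg=$(awk -F'[/ ]' '/^rtt/ { print $8 }' <<< "$ping_summary")
echo "avg rtt: ${avg} ms"    # prints: avg rtt: 0.128 ms
```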
22:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@823 -- # '[' -z 1327412 ']' 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:42.138 22:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.395 [2024-07-15 22:45:25.644753] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:21:42.395 [2024-07-15 22:45:25.644853] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.395 [2024-07-15 22:45:25.713841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.395 [2024-07-15 22:45:25.832930] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.395 [2024-07-15 22:45:25.833009] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.395 [2024-07-15 22:45:25.833033] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.395 [2024-07-15 22:45:25.833048] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.395 [2024-07-15 22:45:25.833068] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:42.395 [2024-07-15 22:45:25.833099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # return 0 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.325 [2024-07-15 22:45:26.615959] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.325 [2024-07-15 22:45:26.624114] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- 
# rpc_cmd bdev_null_create null0 1000 512 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.325 null0 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.325 null1 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1327563 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1327563 /tmp/host.sock 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@823 -- # '[' -z 1327563 ']' 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # local rpc_addr=/tmp/host.sock 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:43.325 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:43.325 22:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.325 [2024-07-15 22:45:26.702254] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:21:43.325 [2024-07-15 22:45:26.702355] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1327563 ] 00:21:43.325 [2024-07-15 22:45:26.764617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.582 [2024-07-15 22:45:26.876365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.582 22:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:43.582 22:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # return 0 00:21:43.582 22:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:43.582 22:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:43.582 22:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:43.582 22:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.582 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:43.582 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp 
-a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:43.582 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:43.582 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.582 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:43.582 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:21:43.582 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:21:43.582 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:43.582 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:43.582 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:43.582 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.582 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:43.582 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:43.582 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:43.582 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:43.582 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:21:43.582 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:43.582 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:43.582 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:43.582 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.582 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:43.582 22:45:27 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # xargs 00:21:43.582 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:43.839 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:21:43.839 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:43.839 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:43.839 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.839 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:43.839 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:21:43.839 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:43.839 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:43.839 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:43.839 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.839 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:43.839 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:43.839 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:43.839 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:43.839 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:21:43.839 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:43.839 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:43.839 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:43.839 
22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:43.839 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.839 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:43.839 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:43.839 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.840 [2024-07-15 22:45:27.285873] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:43.840 
22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:43.840 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:44.098 22:45:27 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_notification_count 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # (( notification_count == expected_count )) 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- 
)) 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_subsystem_names 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ '' == \n\v\m\e\0 ]] 00:21:44.098 22:45:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # sleep 1 00:21:44.664 [2024-07-15 22:45:28.066102] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:44.664 [2024-07-15 22:45:28.066132] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:44.664 [2024-07-15 22:45:28.066173] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:44.664 [2024-07-15 22:45:28.152464] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:44.922 [2024-07-15 22:45:28.215397] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:44.922 [2024-07-15 22:45:28.215422] 
bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_subsystem_names 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' 
'"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_bdev_list 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_subsystem_paths nvme0 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:45.180 22:45:28 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ 4420 == \4\4\2\0 ]] 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_notification_count 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@74 -- # jq '. | length' 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # (( notification_count == expected_count )) 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:45.180 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:45.181 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:45.181 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:21:45.181 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:21:45.181 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:45.181 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_bdev_list 00:21:45.181 22:45:28 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:45.181 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:45.181 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:45.181 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.181 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:45.181 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_notification_count 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # (( notification_count == expected_count )) 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.439 [2024-07-15 22:45:28.894506] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:45.439 [2024-07-15 22:45:28.895619] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:45.439 [2024-07-15 22:45:28.895657] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_subsystem_names 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:21:45.439 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:21:45.697 22:45:28 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:45.697 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_bdev_list 00:21:45.697 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:45.697 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:45.697 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.697 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:45.697 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:45.697 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:45.697 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:45.697 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:45.697 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:21:45.697 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:45.697 [2024-07-15 22:45:28.982431] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:45.697 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:45.697 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:21:45.697 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:21:45.697 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' 
'"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:45.697 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_subsystem_paths nvme0 00:21:45.697 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:45.697 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:45.697 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.697 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:45.697 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:45.697 22:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:45.697 22:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:45.697 22:45:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:21:45.697 22:45:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # sleep 1 00:21:45.956 [2024-07-15 22:45:29.248745] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:45.956 [2024-07-15 22:45:29.248770] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:45.956 [2024-07-15 22:45:29.248781] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:46.915 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@909 -- # get_subsystem_paths nvme0 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@909 -- # get_notification_count 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # (( notification_count == expected_count )) 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.916 [2024-07-15 22:45:30.114589] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:46.916 [2024-07-15 22:45:30.114640] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:46.916 [2024-07-15 22:45:30.114715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.916 [2024-07-15 22:45:30.114749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:46.916 [2024-07-15 22:45:30.114766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.916 [2024-07-15 22:45:30.114780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.916 [2024-07-15 22:45:30.114794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.916 [2024-07-15 22:45:30.114807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.916 [2024-07-15 22:45:30.114822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.916 [2024-07-15 22:45:30.114835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.916 [2024-07-15 22:45:30.114848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a04c00 is same with the state(5) to be set 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@909 -- # get_subsystem_names 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:46.916 [2024-07-15 22:45:30.124696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a04c00 (9): Bad file descriptor 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:46.916 [2024-07-15 22:45:30.134744] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:46.916 [2024-07-15 22:45:30.135048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.916 [2024-07-15 22:45:30.135080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a04c00 with addr=10.0.0.2, port=4420 00:21:46.916 [2024-07-15 22:45:30.135098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a04c00 is same with the state(5) to be set 00:21:46.916 [2024-07-15 22:45:30.135122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a04c00 (9): Bad file descriptor 00:21:46.916 [2024-07-15 22:45:30.135168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:46.916 [2024-07-15 22:45:30.135186] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 
00:21:46.916 [2024-07-15 22:45:30.135205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:46.916 [2024-07-15 22:45:30.135227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:46.916 [2024-07-15 22:45:30.144833] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:46.916 [2024-07-15 22:45:30.145109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.916 [2024-07-15 22:45:30.145138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a04c00 with addr=10.0.0.2, port=4420 00:21:46.916 [2024-07-15 22:45:30.145169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a04c00 is same with the state(5) to be set 00:21:46.916 [2024-07-15 22:45:30.145195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a04c00 (9): Bad file descriptor 00:21:46.916 [2024-07-15 22:45:30.145218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:46.916 [2024-07-15 22:45:30.145235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:46.916 [2024-07-15 22:45:30.145250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:46.916 [2024-07-15 22:45:30.145271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:46.916 [2024-07-15 22:45:30.154929] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:46.916 [2024-07-15 22:45:30.155235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.916 [2024-07-15 22:45:30.155266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a04c00 with addr=10.0.0.2, port=4420 00:21:46.916 [2024-07-15 22:45:30.155284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a04c00 is same with the state(5) to be set 00:21:46.916 [2024-07-15 22:45:30.155308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a04c00 (9): Bad file descriptor 00:21:46.916 [2024-07-15 22:45:30.155332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:46.916 [2024-07-15 22:45:30.155348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:46.916 [2024-07-15 22:45:30.155363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:46.916 [2024-07-15 22:45:30.155399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:46.916 [2024-07-15 22:45:30.165000] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:46.916 [2024-07-15 22:45:30.165304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.916 [2024-07-15 22:45:30.165337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a04c00 with addr=10.0.0.2, port=4420 00:21:46.916 [2024-07-15 22:45:30.165361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a04c00 is same with the state(5) to be set 00:21:46.916 [2024-07-15 22:45:30.165387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a04c00 (9): Bad file descriptor 00:21:46.916 [2024-07-15 22:45:30.165410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:46.916 [2024-07-15 22:45:30.165426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:46.916 [2024-07-15 22:45:30.165441] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:46.916 [2024-07-15 22:45:30.165461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_bdev_list 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:46.916 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:46.917 [2024-07-15 22:45:30.175073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:46.917 [2024-07-15 22:45:30.175330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.917 [2024-07-15 22:45:30.175358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a04c00 with addr=10.0.0.2, port=4420 00:21:46.917 [2024-07-15 22:45:30.175374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a04c00 is same with the state(5) to be set 00:21:46.917 [2024-07-15 22:45:30.175396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a04c00 (9): Bad file descriptor 00:21:46.917 [2024-07-15 22:45:30.175476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:46.917 [2024-07-15 22:45:30.175512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:46.917 
[2024-07-15 22:45:30.175525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:46.917 [2024-07-15 22:45:30.175544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:46.917 [2024-07-15 22:45:30.185144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:46.917 [2024-07-15 22:45:30.185406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.917 [2024-07-15 22:45:30.185437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a04c00 with addr=10.0.0.2, port=4420 00:21:46.917 [2024-07-15 22:45:30.185455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a04c00 is same with the state(5) to be set 00:21:46.917 [2024-07-15 22:45:30.185480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a04c00 (9): Bad file descriptor 00:21:46.917 [2024-07-15 22:45:30.185518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:46.917 [2024-07-15 22:45:30.185539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:46.917 [2024-07-15 22:45:30.185555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:46.917 [2024-07-15 22:45:30.185575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:46.917 [2024-07-15 22:45:30.195231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:46.917 [2024-07-15 22:45:30.195443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.917 [2024-07-15 22:45:30.195474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a04c00 with addr=10.0.0.2, port=4420 00:21:46.917 [2024-07-15 22:45:30.195493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a04c00 is same with the state(5) to be set 00:21:46.917 [2024-07-15 22:45:30.195517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a04c00 (9): Bad file descriptor 00:21:46.917 [2024-07-15 22:45:30.195568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:46.917 [2024-07-15 22:45:30.195591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:46.917 [2024-07-15 22:45:30.195606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:46.917 [2024-07-15 22:45:30.195628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:46.917 [2024-07-15 22:45:30.200579] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:46.917 [2024-07-15 22:45:30.200612] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_subsystem_paths nvme0 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.917 22:45:30 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ 4421 == \4\4\2\1 ]] 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_notification_count 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # (( notification_count == expected_count )) 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_subsystem_names 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ '' == '' ]] 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_bdev_list 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ '' == '' ]] 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_notification_count 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:46.917 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:47.175 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:21:47.175 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:21:47.175 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # (( notification_count == expected_count )) 00:21:47.175 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:21:47.175 22:45:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:47.175 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:47.175 22:45:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.108 [2024-07-15 22:45:31.484978] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:48.108 [2024-07-15 22:45:31.485003] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:48.108 [2024-07-15 22:45:31.485035] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:48.366 [2024-07-15 22:45:31.612459] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:48.625 [2024-07-15 22:45:31.922633] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:48.625 [2024-07-15 22:45:31.922671] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:48.625 22:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:48.625 22:45:31 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:21:48.625 22:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@642 -- # local es=0
00:21:48.625 22:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:21:48.625 22:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@630 -- # local arg=rpc_cmd
00:21:48.625 22:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in
00:21:48.625 22:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # type -t rpc_cmd
00:21:48.625 22:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in
00:21:48.625 22:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@645 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:21:48.625 22:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:21:48.625 22:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:48.625 request:
00:21:48.625 {
00:21:48.625 "name": "nvme",
00:21:48.625 "trtype": "tcp",
00:21:48.625 "traddr": "10.0.0.2",
00:21:48.625 "adrfam": "ipv4",
00:21:48.625 "trsvcid": "8009",
00:21:48.625 "hostnqn": "nqn.2021-12.io.spdk:test",
00:21:48.625 "wait_for_attach": true,
00:21:48.625 "method": "bdev_nvme_start_discovery",
00:21:48.625 "req_id": 1
00:21:48.625 }
00:21:48.625 Got JSON-RPC error response
00:21:48.625 response:
00:21:48.625 {
00:21:48.625 "code": -17,
00:21:48.625 "message": "File exists"
00:21:48.625 }
00:21:48.625 22:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 --
# [[ 1 == 0 ]] 00:21:48.625 22:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@645 -- # es=1 00:21:48.625 22:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:21:48.625 22:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:21:48.625 22:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:21:48.625 22:45:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:21:48.625 22:45:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:48.625 22:45:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:48.625 22:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:48.625 22:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.625 22:45:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:48.625 22:45:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:48.625 22:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:48.625 22:45:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:21:48.625 22:45:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:21:48.625 22:45:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:48.625 22:45:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:48.625 22:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:48.625 22:45:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:48.625 22:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.625 22:45:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 
-- # xargs
00:21:48.625 22:45:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:21:48.625 22:45:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:21:48.625 22:45:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:21:48.625 22:45:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@642 -- # local es=0
00:21:48.625 22:45:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:21:48.625 22:45:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@630 -- # local arg=rpc_cmd
00:21:48.625 22:45:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in
00:21:48.625 22:45:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # type -t rpc_cmd
00:21:48.625 22:45:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in
00:21:48.625 22:45:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@645 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:21:48.625 22:45:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:21:48.625 22:45:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:48.625 request:
00:21:48.625 {
00:21:48.625 "name": "nvme_second",
00:21:48.625 "trtype": "tcp",
00:21:48.625 "traddr": "10.0.0.2",
00:21:48.625 "adrfam": "ipv4",
00:21:48.625 "trsvcid": "8009",
00:21:48.625 "hostnqn": "nqn.2021-12.io.spdk:test",
00:21:48.625 "wait_for_attach": true,
00:21:48.625 "method": "bdev_nvme_start_discovery",
00:21:48.625 "req_id": 1
00:21:48.625 }
00:21:48.625 Got JSON-RPC error response
00:21:48.625 response:
00:21:48.625 {
00:21:48.625 "code": -17,
00:21:48.625 "message": "File exists"
00:21:48.625 }
00:21:48.625 22:45:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]]
00:21:48.625 22:45:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@645 -- # es=1
00:21:48.625 22:45:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@653 -- # (( es > 128 ))
00:21:48.625 22:45:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@664 -- # [[ -n '' ]]
00:21:48.625 22:45:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@669 -- # (( !es == 0 ))
00:21:48.626 22:45:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs
00:21:48.626 22:45:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:21:48.626 22:45:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:21:48.626 22:45:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:21:48.626 22:45:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:48.626 22:45:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:21:48.626 22:45:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:21:48.626 22:45:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:21:48.626 22:45:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]]
00:21:48.626 22:45:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list
00:21:48.626 22:45:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:21:48.626 22:45:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:21:48.626 22:45:32
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:48.626 22:45:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:48.626 22:45:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.626 22:45:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:48.626 22:45:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:48.626 22:45:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:48.626 22:45:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:48.626 22:45:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@642 -- # local es=0 00:21:48.626 22:45:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:48.626 22:45:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:21:48.626 22:45:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:21:48.626 22:45:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:21:48.626 22:45:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:21:48.626 22:45:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@645 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:48.626 22:45:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:48.626 22:45:32 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x
00:21:49.998 [2024-07-15 22:45:33.126269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:49.998 [2024-07-15 22:45:33.126338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a1fc90 with addr=10.0.0.2, port=8010
00:21:49.998 [2024-07-15 22:45:33.126387] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:21:49.998 [2024-07-15 22:45:33.126412] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:21:49.998 [2024-07-15 22:45:33.126427] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:21:50.937 [2024-07-15 22:45:34.128589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:50.937 [2024-07-15 22:45:34.128643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be2540 with addr=10.0.0.2, port=8010
00:21:50.937 [2024-07-15 22:45:34.128673] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:21:50.937 [2024-07-15 22:45:34.128687] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:21:50.937 [2024-07-15 22:45:34.128700] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:21:51.869 [2024-07-15 22:45:35.130724] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr
00:21:51.869 request:
00:21:51.869 {
00:21:51.869 "name": "nvme_second",
00:21:51.869 "trtype": "tcp",
00:21:51.869 "traddr": "10.0.0.2",
00:21:51.869 "adrfam": "ipv4",
00:21:51.869 "trsvcid": "8010",
00:21:51.869 "hostnqn": "nqn.2021-12.io.spdk:test",
00:21:51.869 "wait_for_attach": false,
00:21:51.869 "attach_timeout_ms": 3000,
00:21:51.869 "method": "bdev_nvme_start_discovery",
00:21:51.869 "req_id": 1
00:21:51.869 }
00:21:51.869 Got JSON-RPC error response
00:21:51.869 response:
00:21:51.869 {
00:21:51.869 "code": -110,
00:21:51.869 "message": "Connection timed out"
00:21:51.869 }
00:21:51.869 22:45:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]]
00:21:51.869 22:45:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@645 -- # es=1
00:21:51.869 22:45:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@653 -- # (( es > 128 ))
00:21:51.869 22:45:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@664 -- # [[ -n '' ]]
00:21:51.869 22:45:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@669 -- # (( !es == 0 ))
00:21:51.869 22:45:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs
00:21:51.869 22:45:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:21:51.869 22:45:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:21:51.869 22:45:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:21:51.869 22:45:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:51.869 22:45:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:21:51.869 22:45:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:21:51.869 22:45:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:21:51.869 22:45:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]]
00:21:51.869 22:45:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT
00:21:51.869 22:45:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1327563
00:21:51.869 22:45:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini
00:21:51.869 22:45:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:51.869 22:45:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync
00:21:51.869 22:45:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:51.869 22:45:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:21:51.869 22:45:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:51.869 22:45:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:51.869 rmmod nvme_tcp 00:21:51.869 rmmod nvme_fabrics 00:21:51.869 rmmod nvme_keyring 00:21:51.869 22:45:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:51.869 22:45:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:21:51.869 22:45:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:21:51.869 22:45:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1327412 ']' 00:21:51.869 22:45:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1327412 00:21:51.869 22:45:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@942 -- # '[' -z 1327412 ']' 00:21:51.869 22:45:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # kill -0 1327412 00:21:51.869 22:45:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@947 -- # uname 00:21:51.869 22:45:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:51.869 22:45:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1327412 00:21:51.869 22:45:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:21:51.870 22:45:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:21:51.870 22:45:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1327412' 00:21:51.870 killing process with pid 1327412 00:21:51.870 22:45:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@961 -- # kill 1327412 
00:21:51.870 22:45:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # wait 1327412
00:21:52.128 22:45:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:21:52.128 22:45:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:21:52.128 22:45:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:21:52.128 22:45:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:21:52.128 22:45:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns
00:21:52.128 22:45:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:52.128 22:45:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:21:52.128 22:45:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:54.665 22:45:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:21:54.665
00:21:54.665 real 0m14.148s
00:21:54.665 user 0m20.490s
00:21:54.665 sys 0m2.881s
00:21:54.665 22:45:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1118 -- # xtrace_disable
00:21:54.665 22:45:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:54.665 ************************************
00:21:54.665 END TEST nvmf_host_discovery
00:21:54.665 ************************************
00:21:54.665 22:45:37 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0
00:21:54.665 22:45:37 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:21:54.665 22:45:37 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']'
00:21:54.665 22:45:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable
00:21:54.665 22:45:37 nvmf_tcp -- common/autotest_common.sh@10
-- # set +x 00:21:54.665 ************************************ 00:21:54.665 START TEST nvmf_host_multipath_status 00:21:54.665 ************************************ 00:21:54.665 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:54.665 * Looking for test storage... 00:21:54.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:54.665 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:54.665 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:21:54.665 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:54.665 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:54.665 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:54.665 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:54.665 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:54.665 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:54.665 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:54.665 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:54.665 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:54.665 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:54.665 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.665 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.665 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:54.665 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:54.665 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:54.665 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:54.665 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:54.665 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:54.665 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:54.665 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:54.665 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.665 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.665 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.665 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:21:54.666 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.666 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:21:54.666 22:45:37 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:54.666 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:54.666 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:54.666 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:54.666 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:54.666 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:54.666 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:54.666 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:54.666 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:54.666 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:54.666 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:54.666 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:21:54.666 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:54.666 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:54.666 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:21:54.666 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:54.666 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:54.666 
22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:54.666 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:54.666 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:54.666 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.666 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:54.666 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.666 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:54.666 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:54.666 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:21:54.666 22:45:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:56.568 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:56.568 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:21:56.568 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:56.568 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:56.568 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:56.568 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:56.568 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:56.568 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:21:56.568 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:21:56.568 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:21:56.568 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:21:56.568 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:21:56.568 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:21:56.568 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:21:56.568 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:21:56.568 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:56.569 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:56.569 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:56.569 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:56.569 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:56.569 22:45:39 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:56.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:56.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:21:56.569 00:21:56.569 --- 10.0.0.2 ping statistics --- 00:21:56.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.569 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:56.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:56.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:21:56.569 00:21:56.569 --- 10.0.0.1 ping statistics --- 00:21:56.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.569 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:56.569 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:21:56.570 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:56.570 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:56.570 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:56.570 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:56.570 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:56.570 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:56.570 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:56.570 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:21:56.570 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:56.570 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@716 -- # xtrace_disable 00:21:56.570 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:56.570 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1330712 00:21:56.570 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:56.570 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1330712 00:21:56.570 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@823 -- # '[' -z 1330712 ']' 00:21:56.570 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.570 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:56.570 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.570 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:56.570 22:45:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:56.570 [2024-07-15 22:45:39.912357] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:21:56.570 [2024-07-15 22:45:39.912429] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.570 [2024-07-15 22:45:39.974589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:56.828 [2024-07-15 22:45:40.087636] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:56.828 [2024-07-15 22:45:40.087699] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:56.828 [2024-07-15 22:45:40.087712] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:56.828 [2024-07-15 22:45:40.087723] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:56.828 [2024-07-15 22:45:40.087732] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
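The startup entries above show `nvmfappstart` launching the NVMe-oF target inside the network namespace created earlier, then waiting for its RPC socket. A condensed sketch of that launch (paths shortened; the log uses the full Jenkins workspace path, and `NS`/`APP` here are just shorthand):

```shell
# Condensed sketch of the target launch traced above. Running the app
# under `ip netns exec` makes it see only the interface (cvl_0_0) that
# was moved into the namespace, isolating target from initiator traffic.
NS=cvl_0_0_ns_spdk          # namespace created by nvmf_tcp_init earlier
APP=./build/bin/nvmf_tgt    # SPDK NVMe-oF target binary

ip netns exec "$NS" "$APP" -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
# waitforlisten then polls until the app answers on /var/tmp/spdk.sock
```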
00:21:56.828 [2024-07-15 22:45:40.087809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:56.828 [2024-07-15 22:45:40.087814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.828 22:45:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:56.828 22:45:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # return 0 00:21:56.828 22:45:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:56.828 22:45:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:56.828 22:45:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:56.828 22:45:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:56.828 22:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1330712 00:21:56.828 22:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:57.087 [2024-07-15 22:45:40.446866] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:57.087 22:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:57.346 Malloc0 00:21:57.346 22:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:57.603 22:45:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:57.861 22:45:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:58.119 [2024-07-15 22:45:41.573216] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.119 22:45:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:58.376 [2024-07-15 22:45:41.870029] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:58.634 22:45:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1330998 00:21:58.634 22:45:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:58.634 22:45:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:58.634 22:45:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1330998 /var/tmp/bdevperf.sock 00:21:58.634 22:45:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@823 -- # '[' -z 1330998 ']' 00:21:58.634 22:45:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:58.634 22:45:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:58.634 22:45:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
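The provisioning RPCs traced above (transport, malloc bdev, subsystem, namespace, and the two listeners that give bdevperf its two paths) condense to the following sketch; `RPC` and `NQN` are shorthand for the full paths shown in the log:

```shell
# Recap of the RPC provisioning sequence from the trace above.
RPC=scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -u 8192   # TCP transport
$RPC bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM disk, 512 B blocks
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -r -m 2
$RPC nvmf_subsystem_add_ns "$NQN" Malloc0
# Two listeners on the same address/different ports: the multipath test
# flips ANA state per listener and watches bdevperf switch paths.
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
```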
00:21:58.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:58.634 22:45:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:58.634 22:45:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:58.891 22:45:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:58.891 22:45:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # return 0 00:21:58.891 22:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:59.148 22:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:21:59.404 Nvme0n1 00:21:59.661 22:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:00.228 Nvme0n1 00:22:00.228 22:45:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:00.228 22:45:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:02.133 22:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:02.133 22:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:02.390 22:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:02.649 22:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:03.586 22:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:03.586 22:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:03.586 22:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:03.586 22:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:03.843 22:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:03.843 22:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:03.843 22:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:03.843 22:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:04.101 22:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:04.101 22:45:47 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:04.101 22:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.101 22:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:04.359 22:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:04.359 22:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:04.359 22:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.359 22:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:04.616 22:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:04.616 22:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:04.616 22:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.616 22:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:04.874 22:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:04.874 22:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:04.874 22:45:48 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.874 22:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:05.132 22:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:05.132 22:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:05.132 22:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:05.391 22:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:05.677 22:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:06.635 22:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:06.635 22:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:06.635 22:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:06.635 22:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:06.891 22:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:22:06.891 22:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:06.891 22:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:06.891 22:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:07.147 22:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:07.147 22:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:07.147 22:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:07.147 22:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:07.403 22:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:07.403 22:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:07.403 22:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:07.403 22:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:07.659 22:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:07.659 22:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 
4420 accessible true 00:22:07.659 22:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:07.659 22:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:07.915 22:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:07.915 22:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:07.915 22:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:07.915 22:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:08.172 22:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.172 22:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:08.172 22:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:08.430 22:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:08.688 22:45:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:09.621 22:45:53 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:09.621 22:45:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:09.621 22:45:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:09.621 22:45:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:09.878 22:45:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:09.878 22:45:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:09.878 22:45:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:09.878 22:45:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:10.135 22:45:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:10.135 22:45:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:10.135 22:45:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.135 22:45:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:10.393 22:45:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:10.393 22:45:53 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:10.393 22:45:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.393 22:45:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:10.651 22:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:10.651 22:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:10.651 22:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.651 22:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:10.908 22:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:10.908 22:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:10.908 22:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.908 22:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:11.166 22:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.166 22:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 
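The repeated `port_status` checks above all run the same jq filter over `bdev_nvme_get_io_paths` output: select the io_path whose listener port matches, then read one boolean field from it. A minimal offline sketch of that filter, fed an illustrative JSON shape (hand-written here, not captured from a live target):

```shell
# Illustrative bdev_nvme_get_io_paths-style output with one active path
# (port 4420) and one standby path (port 4421). The real test fetches
# this JSON over /var/tmp/bdevperf.sock instead of hard-coding it.
paths='{"poll_groups":[{"io_paths":[
  {"transport":{"trsvcid":"4420"},"current":true,"connected":true,"accessible":true},
  {"transport":{"trsvcid":"4421"},"current":false,"connected":true,"accessible":true}]}]}'

# Same filter shape as the port_status helper in the trace above:
# $1 = listener port, $2 = field to read (current/connected/accessible).
port_status() {
  echo "$paths" |
    jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2"
}

port_status 4420 current      # -> true
port_status 4421 current      # -> false
port_status 4421 accessible   # -> true
```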
00:22:11.166 22:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:22:11.423 22:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:22:11.681 22:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:22:12.618 22:45:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:22:12.618 22:45:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:22:12.618 22:45:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:12.618 22:45:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:22:12.877 22:45:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:12.877 22:45:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:22:12.877 22:45:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:13.135 22:45:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:22:13.135 22:45:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:22:13.135 22:45:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:22:13.135 22:45:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:13.135 22:45:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:22:13.393 22:45:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:13.393 22:45:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:22:13.393 22:45:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:13.393 22:45:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:22:13.650 22:45:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:13.650 22:45:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:22:13.650 22:45:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:13.650 22:45:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:22:13.909 22:45:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:13.909 22:45:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:22:13.909 22:45:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:13.909 22:45:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:22:14.167 22:45:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:22:14.167 22:45:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:22:14.167 22:45:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:22:14.426 22:45:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:22:14.684 22:45:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:22:16.062 22:45:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:22:16.062 22:45:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:22:16.062 22:45:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:16.063 22:45:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:22:16.063 22:45:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:22:16.063 22:45:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:22:16.063 22:45:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:16.063 22:45:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:22:16.321 22:45:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:22:16.321 22:45:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:22:16.321 22:45:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:16.321 22:45:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:22:16.579 22:45:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:16.579 22:45:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:22:16.579 22:45:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:16.579 22:45:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:22:16.838 22:46:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:16.838 22:46:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:22:16.838 22:46:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:16.838 22:46:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:22:17.096 22:46:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:22:17.096 22:46:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:22:17.096 22:46:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:17.096 22:46:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:22:17.355 22:46:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:22:17.355 22:46:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:22:17.355 22:46:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:22:17.614 22:46:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:22:17.874 22:46:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:22:18.808 22:46:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:22:18.808 22:46:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:22:18.808 22:46:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:18.808 22:46:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:22:19.065 22:46:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:22:19.065 22:46:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:22:19.066 22:46:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:19.066 22:46:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:22:19.334 22:46:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:19.334 22:46:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:22:19.334 22:46:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:19.334 22:46:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:22:19.618 22:46:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:19.618 22:46:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:22:19.618 22:46:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:19.618 22:46:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:22:19.877 22:46:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:19.877 22:46:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:22:19.877 22:46:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:19.877 22:46:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:22:20.136 22:46:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:22:20.136 22:46:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:22:20.136 22:46:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:20.136 22:46:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:22:20.395 22:46:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:20.395 22:46:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:22:20.395 22:46:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:22:20.395 22:46:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:22:20.653 22:46:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:22:20.911 22:46:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:22:22.289 22:46:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:22:22.289 22:46:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:22:22.289 22:46:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:22.289 22:46:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:22:22.289 22:46:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:22.289 22:46:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:22:22.289 22:46:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:22.289 22:46:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:22:22.547 22:46:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:22.547 22:46:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:22:22.547 22:46:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:22.548 22:46:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:22:22.805 22:46:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:22.805 22:46:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:22:22.805 22:46:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:22.805 22:46:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:22:23.064 22:46:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:23.064 22:46:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:22:23.064 22:46:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:23.064 22:46:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:22:23.322 22:46:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:23.322 22:46:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:22:23.322 22:46:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:23.322 22:46:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:22:23.580 22:46:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:23.580 22:46:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:22:23.580 22:46:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:22:23.838 22:46:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:22:24.096 22:46:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:22:25.035 22:46:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:22:25.035 22:46:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:22:25.035 22:46:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:25.035 22:46:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:22:25.293 22:46:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:22:25.293 22:46:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:22:25.293 22:46:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:25.293 22:46:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:22:25.552 22:46:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:25.552 22:46:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:22:25.552 22:46:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:25.552 22:46:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:22:25.810 22:46:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:25.810 22:46:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:22:25.810 22:46:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:25.810 22:46:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:22:26.068 22:46:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:26.068 22:46:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:22:26.068 22:46:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:26.068 22:46:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:22:26.327 22:46:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:26.327 22:46:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:22:26.327 22:46:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:26.327 22:46:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:22:26.585 22:46:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:26.585 22:46:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:22:26.585 22:46:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:22:26.843 22:46:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:22:27.102 22:46:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
00:22:28.034 22:46:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:22:28.034 22:46:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:22:28.034 22:46:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:28.035 22:46:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:22:28.291 22:46:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:28.291 22:46:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:22:28.291 22:46:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:28.291 22:46:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:22:28.548 22:46:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:28.548 22:46:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:22:28.548 22:46:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:28.548 22:46:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:22:28.805 22:46:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:28.805 22:46:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:22:28.805 22:46:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:28.805 22:46:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:22:29.062 22:46:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:29.062 22:46:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:22:29.062 22:46:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:29.062 22:46:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:22:29.319 22:46:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:29.319 22:46:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:22:29.319 22:46:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:29.319 22:46:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:22:29.575 22:46:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:29.575 22:46:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:22:29.575 22:46:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:22:29.832 22:46:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:22:30.090 22:46:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:22:31.025 22:46:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:22:31.025 22:46:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:22:31.025 22:46:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:31.025 22:46:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:22:31.283 22:46:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:31.283 22:46:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:22:31.283 22:46:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:31.283 22:46:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:22:31.540 22:46:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:22:31.540 22:46:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:22:31.540 22:46:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:31.540 22:46:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:22:31.799 22:46:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:31.799 22:46:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:22:31.799 22:46:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:31.799 22:46:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:22:32.056 22:46:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:32.056 22:46:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:22:32.056 22:46:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:32.056 22:46:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:22:32.313 22:46:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:32.313 22:46:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:22:32.313 22:46:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:32.313 22:46:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:22:32.572 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:22:32.572 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1330998
00:22:32.572 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@942 -- # '[' -z 1330998 ']'
00:22:32.572 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # kill -0 1330998
00:22:32.572 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # uname
00:22:32.572 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:22:32.572 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1330998
00:22:32.572 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # process_name=reactor_2
00:22:32.572 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']'
00:22:32.572 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1330998'
killing process with pid 1330998
00:22:32.572 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@961 -- # kill 1330998
00:22:32.572 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # wait 1330998
00:22:32.911 Connection closed with partial response:
00:22:32.911
00:22:32.911
00:22:32.911 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1330998
00:22:32.911 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:32.911 [2024-07-15 22:45:41.937458] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization...
00:22:32.911 [2024-07-15 22:45:41.937553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1330998 ]
00:22:32.911 [2024-07-15 22:45:41.996557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:32.911 [2024-07-15 22:45:42.107499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:22:32.911 Running I/O for 90 seconds...
00:22:32.911 [2024-07-15 22:45:57.870862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:90992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:32.911 [2024-07-15 22:45:57.870926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:22:32.911 [2024-07-15 22:45:57.871016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:91000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:32.911 [2024-07-15 22:45:57.871054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:22:32.911 [2024-07-15 22:45:57.871079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:91008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:32.911 [2024-07-15 22:45:57.871097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:22:32.911 [2024-07-15 22:45:57.871121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:91016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:32.911 [2024-07-15 22:45:57.871138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:22:32.911 [2024-07-15 22:45:57.871169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:91024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:32.911 [2024-07-15 22:45:57.871186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:22:32.911 [2024-07-15 22:45:57.871224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:91032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:32.911 [2024-07-15 22:45:57.871243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:22:32.911 [2024-07-15 22:45:57.871266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:91040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:32.911 [2024-07-15 22:45:57.871283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:22:32.911 [2024-07-15 22:45:57.871305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:91048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:32.911 [2024-07-15 22:45:57.871322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:22:32.911 [2024-07-15 22:45:57.871380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:91056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:32.911 [2024-07-15 22:45:57.871402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:22:32.911 [2024-07-15 22:45:57.871425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:91064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:32.911 [2024-07-15 22:45:57.871442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:22:32.911 [2024-07-15 22:45:57.871464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:91072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:32.911 [2024-07-15 22:45:57.871493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:22:32.911 [2024-07-15
22:45:57.871516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:91080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.911 [2024-07-15 22:45:57.871532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:32.911 [2024-07-15 22:45:57.871554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:91088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.911 [2024-07-15 22:45:57.871584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:32.911 [2024-07-15 22:45:57.871605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.911 [2024-07-15 22:45:57.871621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:32.911 [2024-07-15 22:45:57.871642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:91104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.911 [2024-07-15 22:45:57.871657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:32.911 [2024-07-15 22:45:57.871677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:91112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.911 [2024-07-15 22:45:57.871692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:32.911 [2024-07-15 22:45:57.871713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:91120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.911 [2024-07-15 22:45:57.871728] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.871749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:91128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.912 [2024-07-15 22:45:57.871765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.871786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:91136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.912 [2024-07-15 22:45:57.871801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.871822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:91144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.912 [2024-07-15 22:45:57.871837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.871858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:91152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.912 [2024-07-15 22:45:57.871873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.871920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:90864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.912 [2024-07-15 22:45:57.871938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.872403] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:90872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.912 [2024-07-15 22:45:57.872426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.872461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:90880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.912 [2024-07-15 22:45:57.872481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.872506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:90888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.912 [2024-07-15 22:45:57.872522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.872546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:90896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.912 [2024-07-15 22:45:57.872563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.872587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:90904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.912 [2024-07-15 22:45:57.872604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.872629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:90912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.912 [2024-07-15 22:45:57.872645] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.872669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.912 [2024-07-15 22:45:57.872687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.872726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:91160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.912 [2024-07-15 22:45:57.872742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.872765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.912 [2024-07-15 22:45:57.872782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.872805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:91176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.912 [2024-07-15 22:45:57.872822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.872844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:91184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.912 [2024-07-15 22:45:57.872860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.872911] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:91192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.912 [2024-07-15 22:45:57.872931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.872956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:91200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.912 [2024-07-15 22:45:57.872973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.873003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:91208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.912 [2024-07-15 22:45:57.873021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.873045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:91216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.912 [2024-07-15 22:45:57.873062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.873086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:91224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.912 [2024-07-15 22:45:57.873103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.873127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:91232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.912 [2024-07-15 22:45:57.873144] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.873168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:91240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.912 [2024-07-15 22:45:57.873184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.873209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:91248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.912 [2024-07-15 22:45:57.873226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.873250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:91256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.912 [2024-07-15 22:45:57.873267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.873291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:91264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.912 [2024-07-15 22:45:57.873323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.873347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.912 [2024-07-15 22:45:57.873364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.873404] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:91280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.912 [2024-07-15 22:45:57.873420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.873444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:91288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.912 [2024-07-15 22:45:57.873461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.873485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:91296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.912 [2024-07-15 22:45:57.873502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.873526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:91304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.912 [2024-07-15 22:45:57.873547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.873572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:91312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.912 [2024-07-15 22:45:57.873589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.873614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:91320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.912 [2024-07-15 22:45:57.873631] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.873655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.912 [2024-07-15 22:45:57.873672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.873697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.912 [2024-07-15 22:45:57.873713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.873738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:91344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.912 [2024-07-15 22:45:57.873755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.873779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:91352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.912 [2024-07-15 22:45:57.873796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.873820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:91360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.912 [2024-07-15 22:45:57.873837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.873860] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:91368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.912 [2024-07-15 22:45:57.873885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.873911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:91376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.912 [2024-07-15 22:45:57.873927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.873952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.912 [2024-07-15 22:45:57.873970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:32.912 [2024-07-15 22:45:57.873994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:91392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.912 [2024-07-15 22:45:57.874010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.874033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:91400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.874054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.874079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:91408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.874096] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.874120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:91416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.874137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.874160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:91424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.874177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.874202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:91432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.874219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.874243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.874260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.874284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:91448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.874317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.874340] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:91456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.874372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.874396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:91464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.874412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.874435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:91472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.874451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.874473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:91480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.874489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.874512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:91488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.874528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.874550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:91496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.874566] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.874596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:91504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.874612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.874635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:91512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.874651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.874674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:91520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.874690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.874713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:91528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.874729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.874751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:91536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.874767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.874788] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:91544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.874804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.874826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:91552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.874842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.874865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:91560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.874904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.874931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:91568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.874948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.874973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:91576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.874989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.875161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:90928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.913 [2024-07-15 22:45:57.875183] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.875215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:91584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.875233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.875266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:91592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.875284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.875312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:91600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.875328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.875356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:91608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.875373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.875400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:91616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.875417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.875445] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:91624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.875476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.875504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:91632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.875521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.875548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.875564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.875591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:91648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.875608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.875635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:91656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.875651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.875677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:91664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.875694] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.875721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:91672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.875737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.875764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:91680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.875780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.875807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:91688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.875828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.875856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:91696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.875873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.875928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:91704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.875946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.875975] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:91712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.875992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:32.913 [2024-07-15 22:45:57.876020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:91720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.913 [2024-07-15 22:45:57.876037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:45:57.876065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:91728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:45:57.876081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:45:57.876110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:91736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:45:57.876126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:45:57.876154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:91744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:45:57.876171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:45:57.876214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:91752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:45:57.876232] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:45:57.876259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:45:57.876275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:45:57.876303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:91768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:45:57.876320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:45:57.876347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:45:57.876363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:45:57.876390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:91784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:45:57.876410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:45:57.876438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:91792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:45:57.876454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:45:57.876482] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:91800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:45:57.876497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:45:57.876524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:45:57.876541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:45:57.876576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:91816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:45:57.876593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:45:57.876620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:91824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:45:57.876636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:45:57.876663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:91832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:45:57.876680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:45:57.876707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:91840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:45:57.876724] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:45:57.876752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:91848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:45:57.876767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:45:57.876794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:91856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:45:57.876810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:45:57.876838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:91864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:45:57.876853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:45:57.876901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:91872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:45:57.876921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:45:57.876950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:90936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.914 [2024-07-15 22:45:57.876966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:45:57.876999] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:90944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.914 [2024-07-15 22:45:57.877017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:45:57.877045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:90952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.914 [2024-07-15 22:45:57.877061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:45:57.877090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:90960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.914 [2024-07-15 22:45:57.877107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:45:57.877135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:90968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.914 [2024-07-15 22:45:57.877152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:45:57.877180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:90976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.914 [2024-07-15 22:45:57.877212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:45:57.877241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:90984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.914 [2024-07-15 22:45:57.877257] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:46:13.469065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:90200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:46:13.469139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:46:13.469214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:90216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:46:13.469237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:46:13.469262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:90232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:46:13.469295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:46:13.469319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:90248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:46:13.469335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:46:13.469357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:90264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:46:13.469375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:46:13.469397] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:90280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:46:13.469414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:46:13.469449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:90296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:46:13.469467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:46:13.469490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:90312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:46:13.469506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:46:13.469527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:90328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:46:13.469544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:46:13.469566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:90344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:46:13.469584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:46:13.469606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:90360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:46:13.469624] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:46:13.469646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:90376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:46:13.469663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:46:13.469685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:90392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:46:13.469702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:46:13.469724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:90408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:46:13.469740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:46:13.469761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:90424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:46:13.469777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:32.914 [2024-07-15 22:46:13.469799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:90440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.914 [2024-07-15 22:46:13.469815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.469836] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:90456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.915 [2024-07-15 22:46:13.469852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.469885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:90472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.915 [2024-07-15 22:46:13.469924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.469950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:90488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.915 [2024-07-15 22:46:13.469972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.469995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:90504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.915 [2024-07-15 22:46:13.470012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.470034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:90520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.915 [2024-07-15 22:46:13.470051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.470073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:90536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.915 [2024-07-15 22:46:13.470089] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.470111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:90552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.915 [2024-07-15 22:46:13.470128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.470150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:90568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.915 [2024-07-15 22:46:13.470166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.470204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:90584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.915 [2024-07-15 22:46:13.470221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.470244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:90600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.915 [2024-07-15 22:46:13.470259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.470281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:90616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.915 [2024-07-15 22:46:13.470297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.470318] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:90632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.915 [2024-07-15 22:46:13.470334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.470356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:90128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.915 [2024-07-15 22:46:13.470372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.470394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:90160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.915 [2024-07-15 22:46:13.470410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.470431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:90192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.915 [2024-07-15 22:46:13.470451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.470474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.915 [2024-07-15 22:46:13.470490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.470511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:90672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.915 [2024-07-15 22:46:13.470528] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.470550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:90688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.915 [2024-07-15 22:46:13.470566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.470588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.915 [2024-07-15 22:46:13.470604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.470625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:90720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.915 [2024-07-15 22:46:13.470642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.472366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:90152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.915 [2024-07-15 22:46:13.472393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.472421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:90736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.915 [2024-07-15 22:46:13.472438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.472463] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:90752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.915 [2024-07-15 22:46:13.472479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.472501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:90768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.915 [2024-07-15 22:46:13.472518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.472541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:90784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.915 [2024-07-15 22:46:13.472557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.472579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:90800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.915 [2024-07-15 22:46:13.472596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.472618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:90816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.915 [2024-07-15 22:46:13.472635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.472663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:90832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.915 [2024-07-15 22:46:13.472680] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.472703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:90848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.915 [2024-07-15 22:46:13.472719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.472743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:90864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.915 [2024-07-15 22:46:13.472759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.472782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:90880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.915 [2024-07-15 22:46:13.472798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.472820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:90896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.915 [2024-07-15 22:46:13.472837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.472859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:90912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.915 [2024-07-15 22:46:13.472885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.472911] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:90928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.915 [2024-07-15 22:46:13.472929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:32.915 [2024-07-15 22:46:13.472951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:90944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.916 [2024-07-15 22:46:13.472968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:32.916 [2024-07-15 22:46:13.472991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:90960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.916 [2024-07-15 22:46:13.473008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:32.916 [2024-07-15 22:46:13.473548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:90976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.916 [2024-07-15 22:46:13.473573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:32.916 [2024-07-15 22:46:13.473600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:90992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.916 [2024-07-15 22:46:13.473619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:32.916 [2024-07-15 22:46:13.473641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:91008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.916 [2024-07-15 22:46:13.473658] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:32.916 [2024-07-15 22:46:13.473685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:91024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.916 [2024-07-15 22:46:13.473703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:32.916 [2024-07-15 22:46:13.473725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:91040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.916 [2024-07-15 22:46:13.473741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:32.916 [2024-07-15 22:46:13.473763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:90168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.916 [2024-07-15 22:46:13.473779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:32.916 [2024-07-15 22:46:13.473801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:91056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.916 [2024-07-15 22:46:13.473817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:32.916 [2024-07-15 22:46:13.473839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.916 [2024-07-15 22:46:13.473857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:32.916 [2024-07-15 22:46:13.473891] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:91088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.916 [2024-07-15 22:46:13.473916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:32.916 [2024-07-15 22:46:13.473941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:91104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.916 [2024-07-15 22:46:13.473958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:32.916 [2024-07-15 22:46:13.473980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:91120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.916 [2024-07-15 22:46:13.473996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:32.916 [2024-07-15 22:46:13.474018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:90208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.916 [2024-07-15 22:46:13.474035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:32.916 [2024-07-15 22:46:13.474058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:90240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.916 [2024-07-15 22:46:13.474074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:32.916 [2024-07-15 22:46:13.474097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:91136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.916 [2024-07-15 22:46:13.474114] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:32.916 Received shutdown signal, test time was about 32.413606 seconds 00:22:32.916 00:22:32.916 Latency(us) 00:22:32.916 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.916 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:32.916 Verification LBA range: start 0x0 length 0x4000 00:22:32.916 Nvme0n1 : 32.41 8128.53 31.75 0.00 0.00 15720.59 370.16 4026531.84 00:22:32.916 =================================================================================================================== 00:22:32.916 Total : 8128.53 31.75 0.00 0.00 15720.59 370.16 4026531.84 00:22:32.916 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:33.175 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:22:33.175 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:33.175 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:22:33.175 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:33.175 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:22:33.175 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:33.175 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:22:33.175 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:33.175 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:33.175 rmmod nvme_tcp 
00:22:33.175 rmmod nvme_fabrics 00:22:33.175 rmmod nvme_keyring 00:22:33.175 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:33.175 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:22:33.175 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:22:33.175 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1330712 ']' 00:22:33.175 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1330712 00:22:33.175 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@942 -- # '[' -z 1330712 ']' 00:22:33.175 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # kill -0 1330712 00:22:33.175 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # uname 00:22:33.175 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:22:33.175 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1330712 00:22:33.433 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:22:33.433 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:22:33.433 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1330712' 00:22:33.433 killing process with pid 1330712 00:22:33.433 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@961 -- # kill 1330712 00:22:33.433 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # wait 1330712 00:22:33.693 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:33.693 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
00:22:33.693 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:33.693 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:33.693 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:33.693 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.693 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:33.693 22:46:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.603 22:46:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:35.603 00:22:35.603 real 0m41.345s 00:22:35.603 user 2m4.961s 00:22:35.603 sys 0m10.303s 00:22:35.603 22:46:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1118 -- # xtrace_disable 00:22:35.603 22:46:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:35.603 ************************************ 00:22:35.603 END TEST nvmf_host_multipath_status 00:22:35.603 ************************************ 00:22:35.603 22:46:19 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:22:35.603 22:46:19 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:35.603 22:46:19 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:22:35.603 22:46:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:22:35.603 22:46:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:35.603 ************************************ 00:22:35.603 START TEST nvmf_discovery_remove_ifc 00:22:35.603 ************************************ 00:22:35.603 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:35.603 * Looking for test storage... 00:22:35.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:22:35.863 22:46:19 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:35.863 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:35.864 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:35.864 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:35.864 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:35.864 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:35.864 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:35.864 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:22:35.864 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:35.864 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:35.864 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.864 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:35.864 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.864 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:35.864 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:35.864 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:22:35.864 22:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@296 -- # e810=() 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:37.768 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:37.768 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.768 
22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:37.768 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:37.769 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:37.769 22:46:21 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:37.769 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:37.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:37.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:22:37.769 00:22:37.769 --- 10.0.0.2 ping statistics --- 00:22:37.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.769 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:37.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:37.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:22:37.769 00:22:37.769 --- 10.0.0.1 ping statistics --- 00:22:37.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.769 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@716 -- # xtrace_disable 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1337194 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1337194 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@823 -- # '[' -z 1337194 ']' 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@828 -- # local max_retries=100 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # xtrace_disable 00:22:37.769 22:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:37.769 [2024-07-15 22:46:21.229934] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:22:37.769 [2024-07-15 22:46:21.230022] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.029 [2024-07-15 22:46:21.297799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.029 [2024-07-15 22:46:21.412582] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.029 [2024-07-15 22:46:21.412648] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.029 [2024-07-15 22:46:21.412664] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.029 [2024-07-15 22:46:21.412678] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.029 [2024-07-15 22:46:21.412689] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:38.029 [2024-07-15 22:46:21.412727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:22:38.967 22:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:22:38.967 22:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # return 0
00:22:38.967 22:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:22:38.967 22:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable
00:22:38.967 22:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:22:38.967 22:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:38.967 22:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd
00:22:38.967 22:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable
00:22:38.967 22:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:22:38.967 [2024-07-15 22:46:22.194636] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:38.967 [2024-07-15 22:46:22.202794] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:22:38.967 null0
00:22:38.967 [2024-07-15 22:46:22.234774] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:38.967 22:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:22:38.967 22:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1337297
00:22:38.967 22:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme
00:22:38.967 22:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1337297 /tmp/host.sock
00:22:38.967 22:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@823 -- # '[' -z 1337297 ']'
00:22:38.967 22:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # local rpc_addr=/tmp/host.sock
00:22:38.967 22:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@828 -- # local max_retries=100
00:22:38.967 22:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:22:38.967 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:22:38.967 22:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # xtrace_disable
00:22:38.967 22:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:22:38.968 [2024-07-15 22:46:22.303763] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization...
00:22:38.968 [2024-07-15 22:46:22.303856] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1337297 ] 00:22:38.968 [2024-07-15 22:46:22.366252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.228 [2024-07-15 22:46:22.483342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.228 22:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:22:39.228 22:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # return 0 00:22:39.228 22:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:39.228 22:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:39.228 22:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:39.228 22:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:39.228 22:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:39.228 22:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:39.228 22:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:39.228 22:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:39.228 22:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:39.228 22:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery 
-b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:39.228 22:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:39.228 22:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:40.168 [2024-07-15 22:46:23.632871] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:40.168 [2024-07-15 22:46:23.632947] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:40.168 [2024-07-15 22:46:23.632971] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:40.428 [2024-07-15 22:46:23.760435] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:40.688 [2024-07-15 22:46:23.947448] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:40.688 [2024-07-15 22:46:23.947523] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:40.688 [2024-07-15 22:46:23.947564] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:40.688 [2024-07-15 22:46:23.947589] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:40.688 [2024-07-15 22:46:23.947618] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:40.688 22:46:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:40.688 22:46:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:40.688 22:46:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:40.688 
[2024-07-15 22:46:23.951251] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xfbb870 was disconnected and freed. delete nvme_qpair. 00:22:40.688 22:46:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:40.688 22:46:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:40.688 22:46:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:40.688 22:46:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:40.688 22:46:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:40.688 22:46:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:40.688 22:46:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:40.688 22:46:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:40.688 22:46:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:22:40.688 22:46:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:22:40.688 22:46:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:40.688 22:46:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:40.688 22:46:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:40.688 22:46:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:40.688 22:46:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:40.688 22:46:24 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:40.688 22:46:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:40.688 22:46:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:40.688 22:46:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:40.688 22:46:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:40.688 22:46:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:41.627 22:46:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:41.627 22:46:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:41.627 22:46:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:41.627 22:46:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:41.627 22:46:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:41.627 22:46:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:41.627 22:46:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:41.627 22:46:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:41.886 22:46:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:41.886 22:46:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:42.823 22:46:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:42.823 22:46:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:42.823 22:46:26 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:42.823 22:46:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:42.823 22:46:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:42.823 22:46:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:42.823 22:46:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:42.823 22:46:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:42.823 22:46:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:42.823 22:46:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:43.761 22:46:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:43.761 22:46:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:43.761 22:46:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:43.761 22:46:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:43.761 22:46:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:43.761 22:46:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:43.761 22:46:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:43.761 22:46:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:43.761 22:46:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:43.761 22:46:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:45.144 22:46:28 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:45.144 22:46:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:45.144 22:46:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:45.144 22:46:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:45.144 22:46:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:45.144 22:46:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:45.144 22:46:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:45.144 22:46:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:45.144 22:46:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:45.144 22:46:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:46.082 22:46:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:46.082 22:46:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:46.082 22:46:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:46.082 22:46:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:46.082 22:46:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:46.082 22:46:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:46.082 22:46:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:46.082 22:46:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:46.082 22:46:29 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:46.082 22:46:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:46.082 [2024-07-15 22:46:29.388790] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:46.082 [2024-07-15 22:46:29.388887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:46.082 [2024-07-15 22:46:29.388937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.082 [2024-07-15 22:46:29.388956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:46.082 [2024-07-15 22:46:29.388970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.082 [2024-07-15 22:46:29.388984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:46.082 [2024-07-15 22:46:29.389002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.082 [2024-07-15 22:46:29.389016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:46.082 [2024-07-15 22:46:29.389029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.082 [2024-07-15 22:46:29.389042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:46.082 [2024-07-15 
22:46:29.389063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.082 [2024-07-15 22:46:29.389076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82300 is same with the state(5) to be set 00:22:46.082 [2024-07-15 22:46:29.398807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf82300 (9): Bad file descriptor 00:22:46.082 [2024-07-15 22:46:29.408848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:47.024 22:46:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:47.024 22:46:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:47.024 22:46:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:47.024 22:46:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:47.024 22:46:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:47.024 22:46:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:47.024 22:46:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:47.024 [2024-07-15 22:46:30.418919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:47.024 [2024-07-15 22:46:30.418986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82300 with addr=10.0.0.2, port=4420 00:22:47.024 [2024-07-15 22:46:30.419016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82300 is same with the state(5) to be set 00:22:47.024 [2024-07-15 22:46:30.419070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf82300 (9): Bad file 
descriptor 00:22:47.024 [2024-07-15 22:46:30.419557] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:47.024 [2024-07-15 22:46:30.419594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:47.024 [2024-07-15 22:46:30.419613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:47.024 [2024-07-15 22:46:30.419632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:47.024 [2024-07-15 22:46:30.419667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:47.024 [2024-07-15 22:46:30.419688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:47.024 22:46:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:47.024 22:46:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:47.024 22:46:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:47.963 [2024-07-15 22:46:31.422197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:47.963 [2024-07-15 22:46:31.422249] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:47.963 [2024-07-15 22:46:31.422263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:47.963 [2024-07-15 22:46:31.422276] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:47.963 [2024-07-15 22:46:31.422312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:47.963 [2024-07-15 22:46:31.422349] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:47.963 [2024-07-15 22:46:31.422407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.963 [2024-07-15 22:46:31.422430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.963 [2024-07-15 22:46:31.422450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.963 [2024-07-15 22:46:31.422463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.963 [2024-07-15 22:46:31.422476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.963 [2024-07-15 22:46:31.422489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.963 [2024-07-15 22:46:31.422502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.963 [2024-07-15 22:46:31.422515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.963 [2024-07-15 22:46:31.422529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.963 [2024-07-15 22:46:31.422542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.963 [2024-07-15 22:46:31.422554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:22:47.963 [2024-07-15 22:46:31.422670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf81780 (9): Bad file descriptor 00:22:47.963 [2024-07-15 22:46:31.423690] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:47.963 [2024-07-15 22:46:31.423713] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:47.963 22:46:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:47.963 22:46:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:47.963 22:46:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:47.963 22:46:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:47.963 22:46:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:47.963 22:46:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:47.963 22:46:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:47.963 22:46:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:48.227 22:46:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:48.227 22:46:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:48.227 22:46:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:48.227 22:46:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:48.227 22:46:31 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:48.227 22:46:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:48.227 22:46:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:48.227 22:46:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:48.227 22:46:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:48.227 22:46:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:48.227 22:46:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:48.227 22:46:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:48.227 22:46:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:48.227 22:46:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:49.191 22:46:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:49.191 22:46:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:49.191 22:46:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:49.191 22:46:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:49.191 22:46:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:49.191 22:46:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:49.191 22:46:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:49.191 22:46:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:49.191 
22:46:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:49.191 22:46:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:50.127 [2024-07-15 22:46:33.479106] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:50.127 [2024-07-15 22:46:33.479161] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:50.127 [2024-07-15 22:46:33.479186] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:50.127 [2024-07-15 22:46:33.565454] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:50.127 22:46:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:50.127 22:46:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:50.127 22:46:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:50.127 22:46:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:50.127 22:46:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:50.127 22:46:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:50.127 22:46:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:50.127 22:46:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:50.386 22:46:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:50.386 22:46:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:50.386 [2024-07-15 22:46:33.792987] 
bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:50.386 [2024-07-15 22:46:33.793055] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:50.386 [2024-07-15 22:46:33.793091] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:50.386 [2024-07-15 22:46:33.793112] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:22:50.386 [2024-07-15 22:46:33.793125] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:50.386 [2024-07-15 22:46:33.797507] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xf89110 was disconnected and freed. delete nvme_qpair. 00:22:51.321 22:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:51.321 22:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:51.321 22:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:51.321 22:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:51.321 22:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:51.321 22:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:51.321 22:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:51.321 22:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:51.321 22:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:51.321 22:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:51.321 22:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@90 -- # killprocess 1337297 00:22:51.321 22:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@942 -- # '[' -z 1337297 ']' 00:22:51.321 22:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # kill -0 1337297 00:22:51.321 22:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # uname 00:22:51.321 22:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:22:51.321 22:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1337297 00:22:51.321 22:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:22:51.321 22:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:22:51.321 22:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1337297' 00:22:51.321 killing process with pid 1337297 00:22:51.321 22:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@961 -- # kill 1337297 00:22:51.321 22:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # wait 1337297 00:22:51.578 22:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:51.578 22:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:51.578 22:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:22:51.578 22:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:51.578 22:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:22:51.578 22:46:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:51.578 22:46:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:51.578 rmmod nvme_tcp 
00:22:51.578 rmmod nvme_fabrics 00:22:51.578 rmmod nvme_keyring 00:22:51.578 22:46:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:51.578 22:46:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:22:51.578 22:46:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:22:51.578 22:46:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1337194 ']' 00:22:51.578 22:46:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1337194 00:22:51.578 22:46:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@942 -- # '[' -z 1337194 ']' 00:22:51.578 22:46:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # kill -0 1337194 00:22:51.578 22:46:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # uname 00:22:51.578 22:46:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:22:51.578 22:46:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1337194 00:22:51.578 22:46:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:22:51.835 22:46:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:22:51.835 22:46:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1337194' 00:22:51.835 killing process with pid 1337194 00:22:51.835 22:46:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@961 -- # kill 1337194 00:22:51.835 22:46:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # wait 1337194 00:22:52.091 22:46:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:52.091 22:46:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:52.091 
22:46:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:52.091 22:46:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:52.091 22:46:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:52.091 22:46:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.091 22:46:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:52.091 22:46:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.995 22:46:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:53.995 00:22:53.995 real 0m18.328s 00:22:53.995 user 0m26.636s 00:22:53.995 sys 0m2.993s 00:22:53.995 22:46:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:22:53.995 22:46:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:53.995 ************************************ 00:22:53.995 END TEST nvmf_discovery_remove_ifc 00:22:53.995 ************************************ 00:22:53.995 22:46:37 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:22:53.995 22:46:37 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:53.995 22:46:37 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:22:53.995 22:46:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:22:53.995 22:46:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:53.995 ************************************ 00:22:53.995 START TEST nvmf_identify_kernel_target 00:22:53.995 ************************************ 00:22:53.995 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:53.995 * Looking for test storage... 00:22:53.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:53.995 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:53.995 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:22:53.995 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:53.995 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:53.995 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:53.995 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:53.995 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:53.995 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:53.995 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:53.995 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:53.995 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:53.995 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:53.995 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:53.995 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:53.995 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:53.995 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:53.995 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:53.995 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:53.995 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:54.253 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:54.253 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.253 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.253 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.253 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.253 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.253 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:22:54.253 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.253 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:22:54.253 22:46:37 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:54.253 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:54.253 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:54.253 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:54.253 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:54.253 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:54.253 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:54.253 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:54.253 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:22:54.253 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:54.253 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:54.253 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:54.253 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:54.253 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:54.253 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.253 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:54.253 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.253 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:54.253 22:46:37 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:54.253 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:22:54.253 22:46:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:56.156 22:46:39 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for 
pci in "${pci_devs[@]}" 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:56.156 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:56.156 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:56.156 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:56.157 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:56.157 Found net devices under 
0000:0a:00.1: cvl_0_1 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:56.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:56.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:22:56.157 00:22:56.157 --- 10.0.0.2 ping statistics --- 00:22:56.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.157 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:56.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:56.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:22:56.157 00:22:56.157 --- 10.0.0.1 ping statistics --- 00:22:56.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.157 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:56.157 22:46:39 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:56.157 22:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:57.093 Waiting for block devices as requested 00:22:57.093 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:22:57.351 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:57.351 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:57.608 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:57.608 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:57.608 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:57.608 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:57.866 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:57.866 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:57.866 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:57.866 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:58.125 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:58.125 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:58.125 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:58.125 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:58.385 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:58.385 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:58.385 22:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:58.385 22:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:58.385 22:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:22:58.385 22:46:41 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1656 -- # local device=nvme0n1 00:22:58.385 22:46:41 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1658 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:58.385 22:46:41 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1659 -- # [[ none != none ]] 00:22:58.385 22:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:22:58.385 22:46:41 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:58.385 22:46:41 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:58.642 No valid GPT data, bailing 00:22:58.642 22:46:41 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:58.642 22:46:41 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:22:58.642 22:46:41 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:22:58.642 22:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:58.642 22:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:22:58.642 22:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:58.642 22:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:58.642 22:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:58.642 22:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:58.642 22:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:22:58.642 22:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:22:58.642 22:46:41 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@669 -- # echo 1 00:22:58.642 22:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:22:58.642 22:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:22:58.642 22:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:22:58.642 22:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:22:58.642 22:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:58.642 22:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:22:58.642 00:22:58.642 Discovery Log Number of Records 2, Generation counter 2 00:22:58.642 =====Discovery Log Entry 0====== 00:22:58.642 trtype: tcp 00:22:58.642 adrfam: ipv4 00:22:58.642 subtype: current discovery subsystem 00:22:58.642 treq: not specified, sq flow control disable supported 00:22:58.642 portid: 1 00:22:58.642 trsvcid: 4420 00:22:58.642 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:58.642 traddr: 10.0.0.1 00:22:58.642 eflags: none 00:22:58.642 sectype: none 00:22:58.642 =====Discovery Log Entry 1====== 00:22:58.642 trtype: tcp 00:22:58.642 adrfam: ipv4 00:22:58.642 subtype: nvme subsystem 00:22:58.642 treq: not specified, sq flow control disable supported 00:22:58.642 portid: 1 00:22:58.642 trsvcid: 4420 00:22:58.642 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:58.642 traddr: 10.0.0.1 00:22:58.642 eflags: none 00:22:58.642 sectype: none 00:22:58.643 22:46:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:22:58.643 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:22:58.643 ===================================================== 00:22:58.643 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:58.643 ===================================================== 00:22:58.643 Controller Capabilities/Features 00:22:58.643 ================================ 00:22:58.643 Vendor ID: 0000 00:22:58.643 Subsystem Vendor ID: 0000 00:22:58.643 Serial Number: 6b36ad35ab50a9544949 00:22:58.643 Model Number: Linux 00:22:58.643 Firmware Version: 6.7.0-68 00:22:58.643 Recommended Arb Burst: 0 00:22:58.643 IEEE OUI Identifier: 00 00 00 00:22:58.643 Multi-path I/O 00:22:58.643 May have multiple subsystem ports: No 00:22:58.643 May have multiple controllers: No 00:22:58.643 Associated with SR-IOV VF: No 00:22:58.643 Max Data Transfer Size: Unlimited 00:22:58.643 Max Number of Namespaces: 0 00:22:58.643 Max Number of I/O Queues: 1024 00:22:58.643 NVMe Specification Version (VS): 1.3 00:22:58.643 NVMe Specification Version (Identify): 1.3 00:22:58.643 Maximum Queue Entries: 1024 00:22:58.643 Contiguous Queues Required: No 00:22:58.643 Arbitration Mechanisms Supported 00:22:58.643 Weighted Round Robin: Not Supported 00:22:58.643 Vendor Specific: Not Supported 00:22:58.643 Reset Timeout: 7500 ms 00:22:58.643 Doorbell Stride: 4 bytes 00:22:58.643 NVM Subsystem Reset: Not Supported 00:22:58.643 Command Sets Supported 00:22:58.643 NVM Command Set: Supported 00:22:58.643 Boot Partition: Not Supported 00:22:58.643 Memory Page Size Minimum: 4096 bytes 00:22:58.643 Memory Page Size Maximum: 4096 bytes 00:22:58.643 Persistent Memory Region: Not Supported 00:22:58.643 Optional Asynchronous Events Supported 00:22:58.643 Namespace Attribute Notices: Not Supported 00:22:58.643 Firmware Activation Notices: Not Supported 00:22:58.643 ANA Change Notices: Not Supported 00:22:58.643 PLE Aggregate Log Change Notices: Not Supported 00:22:58.643 LBA Status Info Alert Notices: Not Supported 
00:22:58.643 EGE Aggregate Log Change Notices: Not Supported 00:22:58.643 Normal NVM Subsystem Shutdown event: Not Supported 00:22:58.643 Zone Descriptor Change Notices: Not Supported 00:22:58.643 Discovery Log Change Notices: Supported 00:22:58.643 Controller Attributes 00:22:58.643 128-bit Host Identifier: Not Supported 00:22:58.643 Non-Operational Permissive Mode: Not Supported 00:22:58.643 NVM Sets: Not Supported 00:22:58.643 Read Recovery Levels: Not Supported 00:22:58.643 Endurance Groups: Not Supported 00:22:58.643 Predictable Latency Mode: Not Supported 00:22:58.643 Traffic Based Keep ALive: Not Supported 00:22:58.643 Namespace Granularity: Not Supported 00:22:58.643 SQ Associations: Not Supported 00:22:58.643 UUID List: Not Supported 00:22:58.643 Multi-Domain Subsystem: Not Supported 00:22:58.643 Fixed Capacity Management: Not Supported 00:22:58.643 Variable Capacity Management: Not Supported 00:22:58.643 Delete Endurance Group: Not Supported 00:22:58.643 Delete NVM Set: Not Supported 00:22:58.643 Extended LBA Formats Supported: Not Supported 00:22:58.643 Flexible Data Placement Supported: Not Supported 00:22:58.643 00:22:58.643 Controller Memory Buffer Support 00:22:58.643 ================================ 00:22:58.643 Supported: No 00:22:58.643 00:22:58.643 Persistent Memory Region Support 00:22:58.643 ================================ 00:22:58.643 Supported: No 00:22:58.643 00:22:58.643 Admin Command Set Attributes 00:22:58.643 ============================ 00:22:58.643 Security Send/Receive: Not Supported 00:22:58.643 Format NVM: Not Supported 00:22:58.643 Firmware Activate/Download: Not Supported 00:22:58.643 Namespace Management: Not Supported 00:22:58.643 Device Self-Test: Not Supported 00:22:58.643 Directives: Not Supported 00:22:58.643 NVMe-MI: Not Supported 00:22:58.643 Virtualization Management: Not Supported 00:22:58.643 Doorbell Buffer Config: Not Supported 00:22:58.643 Get LBA Status Capability: Not Supported 00:22:58.643 Command & Feature 
Lockdown Capability: Not Supported 00:22:58.643 Abort Command Limit: 1 00:22:58.643 Async Event Request Limit: 1 00:22:58.643 Number of Firmware Slots: N/A 00:22:58.643 Firmware Slot 1 Read-Only: N/A 00:22:58.643 Firmware Activation Without Reset: N/A 00:22:58.643 Multiple Update Detection Support: N/A 00:22:58.643 Firmware Update Granularity: No Information Provided 00:22:58.643 Per-Namespace SMART Log: No 00:22:58.643 Asymmetric Namespace Access Log Page: Not Supported 00:22:58.643 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:58.643 Command Effects Log Page: Not Supported 00:22:58.643 Get Log Page Extended Data: Supported 00:22:58.643 Telemetry Log Pages: Not Supported 00:22:58.643 Persistent Event Log Pages: Not Supported 00:22:58.643 Supported Log Pages Log Page: May Support 00:22:58.643 Commands Supported & Effects Log Page: Not Supported 00:22:58.643 Feature Identifiers & Effects Log Page:May Support 00:22:58.643 NVMe-MI Commands & Effects Log Page: May Support 00:22:58.643 Data Area 4 for Telemetry Log: Not Supported 00:22:58.643 Error Log Page Entries Supported: 1 00:22:58.643 Keep Alive: Not Supported 00:22:58.643 00:22:58.643 NVM Command Set Attributes 00:22:58.643 ========================== 00:22:58.643 Submission Queue Entry Size 00:22:58.643 Max: 1 00:22:58.643 Min: 1 00:22:58.643 Completion Queue Entry Size 00:22:58.643 Max: 1 00:22:58.643 Min: 1 00:22:58.643 Number of Namespaces: 0 00:22:58.643 Compare Command: Not Supported 00:22:58.643 Write Uncorrectable Command: Not Supported 00:22:58.643 Dataset Management Command: Not Supported 00:22:58.643 Write Zeroes Command: Not Supported 00:22:58.643 Set Features Save Field: Not Supported 00:22:58.643 Reservations: Not Supported 00:22:58.643 Timestamp: Not Supported 00:22:58.643 Copy: Not Supported 00:22:58.643 Volatile Write Cache: Not Present 00:22:58.643 Atomic Write Unit (Normal): 1 00:22:58.643 Atomic Write Unit (PFail): 1 00:22:58.643 Atomic Compare & Write Unit: 1 00:22:58.643 Fused 
Compare & Write: Not Supported 00:22:58.643 Scatter-Gather List 00:22:58.643 SGL Command Set: Supported 00:22:58.643 SGL Keyed: Not Supported 00:22:58.643 SGL Bit Bucket Descriptor: Not Supported 00:22:58.643 SGL Metadata Pointer: Not Supported 00:22:58.643 Oversized SGL: Not Supported 00:22:58.643 SGL Metadata Address: Not Supported 00:22:58.643 SGL Offset: Supported 00:22:58.643 Transport SGL Data Block: Not Supported 00:22:58.643 Replay Protected Memory Block: Not Supported 00:22:58.643 00:22:58.643 Firmware Slot Information 00:22:58.643 ========================= 00:22:58.643 Active slot: 0 00:22:58.643 00:22:58.643 00:22:58.643 Error Log 00:22:58.643 ========= 00:22:58.643 00:22:58.643 Active Namespaces 00:22:58.643 ================= 00:22:58.643 Discovery Log Page 00:22:58.643 ================== 00:22:58.643 Generation Counter: 2 00:22:58.643 Number of Records: 2 00:22:58.643 Record Format: 0 00:22:58.643 00:22:58.643 Discovery Log Entry 0 00:22:58.643 ---------------------- 00:22:58.643 Transport Type: 3 (TCP) 00:22:58.643 Address Family: 1 (IPv4) 00:22:58.643 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:58.643 Entry Flags: 00:22:58.643 Duplicate Returned Information: 0 00:22:58.643 Explicit Persistent Connection Support for Discovery: 0 00:22:58.643 Transport Requirements: 00:22:58.643 Secure Channel: Not Specified 00:22:58.643 Port ID: 1 (0x0001) 00:22:58.643 Controller ID: 65535 (0xffff) 00:22:58.643 Admin Max SQ Size: 32 00:22:58.643 Transport Service Identifier: 4420 00:22:58.643 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:58.643 Transport Address: 10.0.0.1 00:22:58.643 Discovery Log Entry 1 00:22:58.643 ---------------------- 00:22:58.643 Transport Type: 3 (TCP) 00:22:58.643 Address Family: 1 (IPv4) 00:22:58.643 Subsystem Type: 2 (NVM Subsystem) 00:22:58.643 Entry Flags: 00:22:58.643 Duplicate Returned Information: 0 00:22:58.643 Explicit Persistent Connection Support for Discovery: 0 00:22:58.643 Transport 
Requirements: 00:22:58.643 Secure Channel: Not Specified 00:22:58.643 Port ID: 1 (0x0001) 00:22:58.643 Controller ID: 65535 (0xffff) 00:22:58.643 Admin Max SQ Size: 32 00:22:58.643 Transport Service Identifier: 4420 00:22:58.643 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:22:58.643 Transport Address: 10.0.0.1 00:22:58.643 22:46:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:58.902 get_feature(0x01) failed 00:22:58.902 get_feature(0x02) failed 00:22:58.902 get_feature(0x04) failed 00:22:58.902 ===================================================== 00:22:58.902 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:58.902 ===================================================== 00:22:58.902 Controller Capabilities/Features 00:22:58.902 ================================ 00:22:58.902 Vendor ID: 0000 00:22:58.902 Subsystem Vendor ID: 0000 00:22:58.902 Serial Number: 2e9441c914a240cf72da 00:22:58.902 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:22:58.902 Firmware Version: 6.7.0-68 00:22:58.902 Recommended Arb Burst: 6 00:22:58.902 IEEE OUI Identifier: 00 00 00 00:22:58.902 Multi-path I/O 00:22:58.902 May have multiple subsystem ports: Yes 00:22:58.902 May have multiple controllers: Yes 00:22:58.902 Associated with SR-IOV VF: No 00:22:58.902 Max Data Transfer Size: Unlimited 00:22:58.902 Max Number of Namespaces: 1024 00:22:58.902 Max Number of I/O Queues: 128 00:22:58.902 NVMe Specification Version (VS): 1.3 00:22:58.902 NVMe Specification Version (Identify): 1.3 00:22:58.902 Maximum Queue Entries: 1024 00:22:58.902 Contiguous Queues Required: No 00:22:58.902 Arbitration Mechanisms Supported 00:22:58.902 Weighted Round Robin: Not Supported 00:22:58.902 Vendor Specific: Not Supported 00:22:58.902 Reset Timeout: 7500 ms 
00:22:58.902 Doorbell Stride: 4 bytes 00:22:58.902 NVM Subsystem Reset: Not Supported 00:22:58.902 Command Sets Supported 00:22:58.902 NVM Command Set: Supported 00:22:58.902 Boot Partition: Not Supported 00:22:58.902 Memory Page Size Minimum: 4096 bytes 00:22:58.902 Memory Page Size Maximum: 4096 bytes 00:22:58.902 Persistent Memory Region: Not Supported 00:22:58.902 Optional Asynchronous Events Supported 00:22:58.902 Namespace Attribute Notices: Supported 00:22:58.902 Firmware Activation Notices: Not Supported 00:22:58.902 ANA Change Notices: Supported 00:22:58.902 PLE Aggregate Log Change Notices: Not Supported 00:22:58.902 LBA Status Info Alert Notices: Not Supported 00:22:58.902 EGE Aggregate Log Change Notices: Not Supported 00:22:58.902 Normal NVM Subsystem Shutdown event: Not Supported 00:22:58.902 Zone Descriptor Change Notices: Not Supported 00:22:58.902 Discovery Log Change Notices: Not Supported 00:22:58.902 Controller Attributes 00:22:58.902 128-bit Host Identifier: Supported 00:22:58.902 Non-Operational Permissive Mode: Not Supported 00:22:58.902 NVM Sets: Not Supported 00:22:58.902 Read Recovery Levels: Not Supported 00:22:58.902 Endurance Groups: Not Supported 00:22:58.902 Predictable Latency Mode: Not Supported 00:22:58.902 Traffic Based Keep ALive: Supported 00:22:58.902 Namespace Granularity: Not Supported 00:22:58.902 SQ Associations: Not Supported 00:22:58.902 UUID List: Not Supported 00:22:58.902 Multi-Domain Subsystem: Not Supported 00:22:58.902 Fixed Capacity Management: Not Supported 00:22:58.902 Variable Capacity Management: Not Supported 00:22:58.902 Delete Endurance Group: Not Supported 00:22:58.902 Delete NVM Set: Not Supported 00:22:58.902 Extended LBA Formats Supported: Not Supported 00:22:58.902 Flexible Data Placement Supported: Not Supported 00:22:58.902 00:22:58.902 Controller Memory Buffer Support 00:22:58.902 ================================ 00:22:58.902 Supported: No 00:22:58.902 00:22:58.902 Persistent Memory Region Support 
00:22:58.902 ================================ 00:22:58.902 Supported: No 00:22:58.902 00:22:58.902 Admin Command Set Attributes 00:22:58.902 ============================ 00:22:58.902 Security Send/Receive: Not Supported 00:22:58.902 Format NVM: Not Supported 00:22:58.902 Firmware Activate/Download: Not Supported 00:22:58.902 Namespace Management: Not Supported 00:22:58.902 Device Self-Test: Not Supported 00:22:58.902 Directives: Not Supported 00:22:58.902 NVMe-MI: Not Supported 00:22:58.902 Virtualization Management: Not Supported 00:22:58.902 Doorbell Buffer Config: Not Supported 00:22:58.902 Get LBA Status Capability: Not Supported 00:22:58.902 Command & Feature Lockdown Capability: Not Supported 00:22:58.902 Abort Command Limit: 4 00:22:58.902 Async Event Request Limit: 4 00:22:58.902 Number of Firmware Slots: N/A 00:22:58.902 Firmware Slot 1 Read-Only: N/A 00:22:58.902 Firmware Activation Without Reset: N/A 00:22:58.902 Multiple Update Detection Support: N/A 00:22:58.902 Firmware Update Granularity: No Information Provided 00:22:58.902 Per-Namespace SMART Log: Yes 00:22:58.902 Asymmetric Namespace Access Log Page: Supported 00:22:58.902 ANA Transition Time : 10 sec 00:22:58.902 00:22:58.902 Asymmetric Namespace Access Capabilities 00:22:58.902 ANA Optimized State : Supported 00:22:58.902 ANA Non-Optimized State : Supported 00:22:58.902 ANA Inaccessible State : Supported 00:22:58.902 ANA Persistent Loss State : Supported 00:22:58.902 ANA Change State : Supported 00:22:58.902 ANAGRPID is not changed : No 00:22:58.902 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:22:58.902 00:22:58.902 ANA Group Identifier Maximum : 128 00:22:58.902 Number of ANA Group Identifiers : 128 00:22:58.902 Max Number of Allowed Namespaces : 1024 00:22:58.902 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:22:58.902 Command Effects Log Page: Supported 00:22:58.902 Get Log Page Extended Data: Supported 00:22:58.902 Telemetry Log Pages: Not Supported 00:22:58.902 Persistent Event Log 
Pages: Not Supported 00:22:58.902 Supported Log Pages Log Page: May Support 00:22:58.902 Commands Supported & Effects Log Page: Not Supported 00:22:58.902 Feature Identifiers & Effects Log Page:May Support 00:22:58.902 NVMe-MI Commands & Effects Log Page: May Support 00:22:58.902 Data Area 4 for Telemetry Log: Not Supported 00:22:58.902 Error Log Page Entries Supported: 128 00:22:58.902 Keep Alive: Supported 00:22:58.902 Keep Alive Granularity: 1000 ms 00:22:58.902 00:22:58.902 NVM Command Set Attributes 00:22:58.902 ========================== 00:22:58.902 Submission Queue Entry Size 00:22:58.902 Max: 64 00:22:58.902 Min: 64 00:22:58.902 Completion Queue Entry Size 00:22:58.902 Max: 16 00:22:58.902 Min: 16 00:22:58.902 Number of Namespaces: 1024 00:22:58.902 Compare Command: Not Supported 00:22:58.902 Write Uncorrectable Command: Not Supported 00:22:58.902 Dataset Management Command: Supported 00:22:58.902 Write Zeroes Command: Supported 00:22:58.902 Set Features Save Field: Not Supported 00:22:58.902 Reservations: Not Supported 00:22:58.902 Timestamp: Not Supported 00:22:58.902 Copy: Not Supported 00:22:58.902 Volatile Write Cache: Present 00:22:58.902 Atomic Write Unit (Normal): 1 00:22:58.902 Atomic Write Unit (PFail): 1 00:22:58.902 Atomic Compare & Write Unit: 1 00:22:58.902 Fused Compare & Write: Not Supported 00:22:58.902 Scatter-Gather List 00:22:58.902 SGL Command Set: Supported 00:22:58.902 SGL Keyed: Not Supported 00:22:58.902 SGL Bit Bucket Descriptor: Not Supported 00:22:58.902 SGL Metadata Pointer: Not Supported 00:22:58.902 Oversized SGL: Not Supported 00:22:58.902 SGL Metadata Address: Not Supported 00:22:58.902 SGL Offset: Supported 00:22:58.902 Transport SGL Data Block: Not Supported 00:22:58.902 Replay Protected Memory Block: Not Supported 00:22:58.902 00:22:58.902 Firmware Slot Information 00:22:58.902 ========================= 00:22:58.902 Active slot: 0 00:22:58.902 00:22:58.902 Asymmetric Namespace Access 00:22:58.902 
=========================== 00:22:58.902 Change Count : 0 00:22:58.902 Number of ANA Group Descriptors : 1 00:22:58.902 ANA Group Descriptor : 0 00:22:58.902 ANA Group ID : 1 00:22:58.902 Number of NSID Values : 1 00:22:58.902 Change Count : 0 00:22:58.902 ANA State : 1 00:22:58.902 Namespace Identifier : 1 00:22:58.902 00:22:58.902 Commands Supported and Effects 00:22:58.902 ============================== 00:22:58.902 Admin Commands 00:22:58.902 -------------- 00:22:58.902 Get Log Page (02h): Supported 00:22:58.902 Identify (06h): Supported 00:22:58.902 Abort (08h): Supported 00:22:58.902 Set Features (09h): Supported 00:22:58.902 Get Features (0Ah): Supported 00:22:58.903 Asynchronous Event Request (0Ch): Supported 00:22:58.903 Keep Alive (18h): Supported 00:22:58.903 I/O Commands 00:22:58.903 ------------ 00:22:58.903 Flush (00h): Supported 00:22:58.903 Write (01h): Supported LBA-Change 00:22:58.903 Read (02h): Supported 00:22:58.903 Write Zeroes (08h): Supported LBA-Change 00:22:58.903 Dataset Management (09h): Supported 00:22:58.903 00:22:58.903 Error Log 00:22:58.903 ========= 00:22:58.903 Entry: 0 00:22:58.903 Error Count: 0x3 00:22:58.903 Submission Queue Id: 0x0 00:22:58.903 Command Id: 0x5 00:22:58.903 Phase Bit: 0 00:22:58.903 Status Code: 0x2 00:22:58.903 Status Code Type: 0x0 00:22:58.903 Do Not Retry: 1 00:22:58.903 Error Location: 0x28 00:22:58.903 LBA: 0x0 00:22:58.903 Namespace: 0x0 00:22:58.903 Vendor Log Page: 0x0 00:22:58.903 ----------- 00:22:58.903 Entry: 1 00:22:58.903 Error Count: 0x2 00:22:58.903 Submission Queue Id: 0x0 00:22:58.903 Command Id: 0x5 00:22:58.903 Phase Bit: 0 00:22:58.903 Status Code: 0x2 00:22:58.903 Status Code Type: 0x0 00:22:58.903 Do Not Retry: 1 00:22:58.903 Error Location: 0x28 00:22:58.903 LBA: 0x0 00:22:58.903 Namespace: 0x0 00:22:58.903 Vendor Log Page: 0x0 00:22:58.903 ----------- 00:22:58.903 Entry: 2 00:22:58.903 Error Count: 0x1 00:22:58.903 Submission Queue Id: 0x0 00:22:58.903 Command Id: 0x4 00:22:58.903 
Phase Bit: 0 00:22:58.903 Status Code: 0x2 00:22:58.903 Status Code Type: 0x0 00:22:58.903 Do Not Retry: 1 00:22:58.903 Error Location: 0x28 00:22:58.903 LBA: 0x0 00:22:58.903 Namespace: 0x0 00:22:58.903 Vendor Log Page: 0x0 00:22:58.903 00:22:58.903 Number of Queues 00:22:58.903 ================ 00:22:58.903 Number of I/O Submission Queues: 128 00:22:58.903 Number of I/O Completion Queues: 128 00:22:58.903 00:22:58.903 ZNS Specific Controller Data 00:22:58.903 ============================ 00:22:58.903 Zone Append Size Limit: 0 00:22:58.903 00:22:58.903 00:22:58.903 Active Namespaces 00:22:58.903 ================= 00:22:58.903 get_feature(0x05) failed 00:22:58.903 Namespace ID:1 00:22:58.903 Command Set Identifier: NVM (00h) 00:22:58.903 Deallocate: Supported 00:22:58.903 Deallocated/Unwritten Error: Not Supported 00:22:58.903 Deallocated Read Value: Unknown 00:22:58.903 Deallocate in Write Zeroes: Not Supported 00:22:58.903 Deallocated Guard Field: 0xFFFF 00:22:58.903 Flush: Supported 00:22:58.903 Reservation: Not Supported 00:22:58.903 Namespace Sharing Capabilities: Multiple Controllers 00:22:58.903 Size (in LBAs): 1953525168 (931GiB) 00:22:58.903 Capacity (in LBAs): 1953525168 (931GiB) 00:22:58.903 Utilization (in LBAs): 1953525168 (931GiB) 00:22:58.903 UUID: b6a0385f-27ea-4723-92b1-a931512bd6c6 00:22:58.903 Thin Provisioning: Not Supported 00:22:58.903 Per-NS Atomic Units: Yes 00:22:58.903 Atomic Boundary Size (Normal): 0 00:22:58.903 Atomic Boundary Size (PFail): 0 00:22:58.903 Atomic Boundary Offset: 0 00:22:58.903 NGUID/EUI64 Never Reused: No 00:22:58.903 ANA group ID: 1 00:22:58.903 Namespace Write Protected: No 00:22:58.903 Number of LBA Formats: 1 00:22:58.903 Current LBA Format: LBA Format #00 00:22:58.903 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:58.903 00:22:58.903 22:46:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:22:58.903 22:46:42 nvmf_tcp.nvmf_identify_kernel_target -- 
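The two `spdk_nvme_identify` dumps above are plain line-oriented `Key: Value` text, which makes them easy to post-process. A small runnable sketch, using a trimmed copy of the output above as sample input (`get_field` is an illustrative helper, not part of the SPDK tooling):

```shell
# Extract fields from spdk_nvme_identify-style output. The sample is a
# trimmed copy of the dump in the log; get_field is an illustrative helper.
identify_out='Serial Number: 2e9441c914a240cf72da
Model Number: SPDK-nqn.2016-06.io.spdk:testnqn
Number of Namespaces: 1024
Max Number of I/O Queues: 128'

# Print the value for an exact key; '|' as the sed delimiter tolerates
# '/' inside keys such as "Max Number of I/O Queues".
get_field() { printf '%s\n' "$identify_out" | sed -n "s|^$1: *||p"; }

get_field "Model Number"               # SPDK-nqn.2016-06.io.spdk:testnqn
get_field "Max Number of I/O Queues"   # 128
```

Anchoring the key at line start keeps "Number of Namespaces" from also matching inside "Max Number of Namespaces".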
nvmf/common.sh@488 -- # nvmfcleanup 00:22:58.903 22:46:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:22:58.903 22:46:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:58.903 22:46:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:22:58.903 22:46:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:58.903 22:46:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:58.903 rmmod nvme_tcp 00:22:58.903 rmmod nvme_fabrics 00:22:58.903 22:46:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:58.903 22:46:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:22:58.903 22:46:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:22:58.903 22:46:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:58.903 22:46:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:58.903 22:46:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:58.903 22:46:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:58.903 22:46:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:58.903 22:46:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:58.903 22:46:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.903 22:46:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:58.903 22:46:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.805 22:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- 
# ip -4 addr flush cvl_0_1 00:23:00.806 22:46:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:00.806 22:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:01.066 22:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:23:01.066 22:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:01.066 22:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:01.066 22:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:01.066 22:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:01.066 22:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:01.066 22:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:01.066 22:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:02.001 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:02.001 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:02.001 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:02.001 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:02.001 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:02.001 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:02.001 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:02.001 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:02.001 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:02.001 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 
00:23:02.001 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:02.001 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:02.001 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:02.001 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:02.260 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:02.260 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:03.198 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:23:03.198 00:23:03.198 real 0m9.100s 00:23:03.198 user 0m1.887s 00:23:03.198 sys 0m3.182s 00:23:03.198 22:46:46 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1118 -- # xtrace_disable 00:23:03.198 22:46:46 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.198 ************************************ 00:23:03.198 END TEST nvmf_identify_kernel_target 00:23:03.198 ************************************ 00:23:03.198 22:46:46 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:23:03.198 22:46:46 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:03.198 22:46:46 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:23:03.198 22:46:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:23:03.198 22:46:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:03.198 ************************************ 00:23:03.198 START TEST nvmf_auth_host 00:23:03.198 ************************************ 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:03.198 * Looking for test storage... 
00:23:03.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:03.198 
22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:03.198 
22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:03.198 22:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:23:03.199 22:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:03.199 22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:03.199 22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:03.199 22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:03.199 22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:03.199 22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:03.199 22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.199 22:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:03.199 22:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:03.199 22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:03.199 22:46:46 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:03.199 22:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:03.199 22:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.135 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:05.135 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:05.135 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:05.135 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:05.135 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:05.135 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:05.135 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:05.135 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:05.135 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:05.135 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:23:05.135 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:05.135 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:23:05.135 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:05.135 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:23:05.135 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:05.135 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:05.135 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:05.135 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:05.135 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:05.135 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:05.135 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:05.136 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:05.136 22:46:48 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:05.136 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices 
under 0000:0a:00.0: cvl_0_0' 00:23:05.136 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:05.136 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:05.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:05.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:23:05.136 00:23:05.136 --- 10.0.0.2 ping statistics --- 00:23:05.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.136 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:23:05.136 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:05.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:05.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:23:05.397 00:23:05.397 --- 10.0.0.1 ping statistics --- 00:23:05.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.397 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:23:05.397 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:05.397 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:23:05.397 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:05.397 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:05.397 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:05.397 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:05.397 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:05.397 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:05.397 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:05.397 22:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:05.397 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:05.397 22:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:05.397 22:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.397 22:46:48 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1344315 00:23:05.397 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:05.397 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1344315 00:23:05.397 22:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@823 -- # '[' -z 1344315 ']' 00:23:05.397 22:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.397 22:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@828 -- # local max_retries=100 00:23:05.397 22:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.397 22:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # xtrace_disable 00:23:05.397 22:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.656 22:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:23:05.656 22:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # return 0 00:23:05.656 22:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:05.656 22:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:05.656 22:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.656 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:05.656 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:05.656 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:05.656 22:46:49 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:05.656 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:05.656 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:05.656 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:05.656 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:05.656 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:05.656 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=07435b70cff0b48f7471583753f61218 00:23:05.656 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:05.656 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.aLJ 00:23:05.656 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 07435b70cff0b48f7471583753f61218 0 00:23:05.656 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 07435b70cff0b48f7471583753f61218 0 00:23:05.656 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:05.656 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:05.656 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=07435b70cff0b48f7471583753f61218 00:23:05.656 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:05.656 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:05.656 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.aLJ 00:23:05.656 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.aLJ 00:23:05.656 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.aLJ 00:23:05.656 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 
64 00:23:05.656 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:05.656 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:05.656 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:05.656 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:05.656 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:05.656 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:05.657 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=85001a277766ad56804b23a0a797ee57391c5c13ac49dd2058a28869fcd57e81 00:23:05.657 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:05.657 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.bgA 00:23:05.657 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 85001a277766ad56804b23a0a797ee57391c5c13ac49dd2058a28869fcd57e81 3 00:23:05.657 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 85001a277766ad56804b23a0a797ee57391c5c13ac49dd2058a28869fcd57e81 3 00:23:05.657 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:05.657 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:05.657 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=85001a277766ad56804b23a0a797ee57391c5c13ac49dd2058a28869fcd57e81 00:23:05.657 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:05.657 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:05.657 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.bgA 00:23:05.657 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.bgA 00:23:05.657 22:46:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.bgA 00:23:05.657 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:05.657 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:05.657 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:05.657 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:05.657 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:05.657 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:05.657 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:05.657 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=238f8abc2f5c72e4d9e0c70b0f8a571b71e6c9451d381d69 00:23:05.657 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:05.657 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.MwK 00:23:05.657 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 238f8abc2f5c72e4d9e0c70b0f8a571b71e6c9451d381d69 0 00:23:05.657 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 238f8abc2f5c72e4d9e0c70b0f8a571b71e6c9451d381d69 0 00:23:05.657 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:05.657 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:05.657 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=238f8abc2f5c72e4d9e0c70b0f8a571b71e6c9451d381d69 00:23:05.657 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:05.657 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:05.915 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.MwK 00:23:05.916 22:46:49 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.MwK 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.MwK 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c4441cb7eca0e67b521c4ce6e8a61165f749b0e0967feea4 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Xmf 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c4441cb7eca0e67b521c4ce6e8a61165f749b0e0967feea4 2 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c4441cb7eca0e67b521c4ce6e8a61165f749b0e0967feea4 2 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c4441cb7eca0e67b521c4ce6e8a61165f749b0e0967feea4 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:05.916 22:46:49 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Xmf 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Xmf 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Xmf 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4835771440c2d6baa7a54aeee4192a01 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.suk 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4835771440c2d6baa7a54aeee4192a01 1 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4835771440c2d6baa7a54aeee4192a01 1 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4835771440c2d6baa7a54aeee4192a01 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.suk 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.suk 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.suk 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2a858c45e49cc2490bb103889f073c71 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.oLd 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2a858c45e49cc2490bb103889f073c71 1 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2a858c45e49cc2490bb103889f073c71 1 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2a858c45e49cc2490bb103889f073c71 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:05.916 
22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.oLd 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.oLd 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.oLd 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9c913a455ad5c187a66737da13613525428efcd5187d7da8 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ocp 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9c913a455ad5c187a66737da13613525428efcd5187d7da8 2 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9c913a455ad5c187a66737da13613525428efcd5187d7da8 2 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=9c913a455ad5c187a66737da13613525428efcd5187d7da8 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ocp 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ocp 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.ocp 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b769a2f1fbf8b2b06df29cdda4aeac0b 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.TvS 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b769a2f1fbf8b2b06df29cdda4aeac0b 0 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b769a2f1fbf8b2b06df29cdda4aeac0b 0 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:05.916 22:46:49 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b769a2f1fbf8b2b06df29cdda4aeac0b 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:05.916 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:06.175 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.TvS 00:23:06.175 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.TvS 00:23:06.175 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.TvS 00:23:06.175 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:06.175 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:06.175 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:06.175 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:06.175 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:06.175 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:06.175 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:06.175 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e64b831f53eae66a190dbe1733d6210918e5685055ddd7b72c93bbdb11e4c5a8 00:23:06.175 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:06.175 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.qlj 00:23:06.175 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e64b831f53eae66a190dbe1733d6210918e5685055ddd7b72c93bbdb11e4c5a8 3 00:23:06.175 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e64b831f53eae66a190dbe1733d6210918e5685055ddd7b72c93bbdb11e4c5a8 3 00:23:06.175 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local 
prefix key digest 00:23:06.175 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:06.175 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e64b831f53eae66a190dbe1733d6210918e5685055ddd7b72c93bbdb11e4c5a8 00:23:06.175 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:06.175 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:06.175 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.qlj 00:23:06.175 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.qlj 00:23:06.175 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.qlj 00:23:06.175 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:06.175 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1344315 00:23:06.175 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@823 -- # '[' -z 1344315 ']' 00:23:06.175 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.175 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@828 -- # local max_retries=100 00:23:06.175 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:06.175 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # xtrace_disable 00:23:06.175 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # return 0 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.aLJ 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.bgA ]] 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bgA 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.MwK 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n 
/tmp/spdk.key-sha384.Xmf ]] 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Xmf 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.suk 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.oLd ]] 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.oLd 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.ocp 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:06.433 
22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.TvS ]] 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.TvS 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.qlj 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:06.433 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:06.434 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:06.434 22:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:06.434 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:06.434 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:06.434 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:06.434 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:06.434 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:06.434 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:23:06.434 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:06.434 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:06.434 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:06.434 22:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:07.368 Waiting for block devices as requested 00:23:07.627 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:23:07.627 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:07.887 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:07.887 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:07.887 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:08.147 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:08.147 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:08.147 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:08.147 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:08.407 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:08.407 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:08.407 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:08.666 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:08.666 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:08.666 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:08.666 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:08.923 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:09.181 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:09.181 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:09.181 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:09.181 22:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1656 -- # local device=nvme0n1 00:23:09.181 22:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:09.182 22:46:52 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@1659 -- # [[ none != none ]] 00:23:09.182 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:09.182 22:46:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:09.182 22:46:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:09.182 No valid GPT data, bailing 00:23:09.182 22:46:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:09.182 22:46:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:09.182 22:46:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:09.182 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:09.182 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:09.182 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:09.182 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:09.182 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:09.182 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:09.182 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:23:09.182 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:09.182 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:23:09.182 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:09.182 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:23:09.182 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:23:09.182 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- 
# echo ipv4 00:23:09.182 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:09.182 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:23:09.439 00:23:09.439 Discovery Log Number of Records 2, Generation counter 2 00:23:09.439 =====Discovery Log Entry 0====== 00:23:09.439 trtype: tcp 00:23:09.439 adrfam: ipv4 00:23:09.439 subtype: current discovery subsystem 00:23:09.439 treq: not specified, sq flow control disable supported 00:23:09.439 portid: 1 00:23:09.439 trsvcid: 4420 00:23:09.439 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:09.439 traddr: 10.0.0.1 00:23:09.439 eflags: none 00:23:09.439 sectype: none 00:23:09.439 =====Discovery Log Entry 1====== 00:23:09.439 trtype: tcp 00:23:09.439 adrfam: ipv4 00:23:09.439 subtype: nvme subsystem 00:23:09.439 treq: not specified, sq flow control disable supported 00:23:09.439 portid: 1 00:23:09.439 trsvcid: 4420 00:23:09.439 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:09.439 traddr: 10.0.0.1 00:23:09.439 eflags: none 00:23:09.439 sectype: none 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.439 22:46:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM4ZjhhYmMyZjVjNzJlNGQ5ZTBjNzBiMGY4YTU3MWI3MWU2Yzk0NTFkMzgxZDY5Z4VzIQ==: 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM4ZjhhYmMyZjVjNzJlNGQ5ZTBjNzBiMGY4YTU3MWI3MWU2Yzk0NTFkMzgxZDY5Z4VzIQ==: 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: ]] 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:09.439 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:09.440 22:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:09.440 22:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.440 nvme0n1 00:23:09.440 22:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:09.440 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.440 22:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:09.440 22:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.440 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.440 22:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:09.440 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.440 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.440 22:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:09.440 22:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.440 22:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDc0MzViNzBjZmYwYjQ4Zjc0NzE1ODM3NTNmNjEyMThRYlY1: 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDc0MzViNzBjZmYwYjQ4Zjc0NzE1ODM3NTNmNjEyMThRYlY1: 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: ]] 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:09.697 22:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.697 nvme0n1 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM4ZjhhYmMyZjVjNzJlNGQ5ZTBjNzBiMGY4YTU3MWI3MWU2Yzk0NTFkMzgxZDY5Z4VzIQ==: 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM4ZjhhYmMyZjVjNzJlNGQ5ZTBjNzBiMGY4YTU3MWI3MWU2Yzk0NTFkMzgxZDY5Z4VzIQ==: 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 
-- # [[ -z DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: ]] 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:09.697 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.955 nvme0n1 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgzNTc3MTQ0MGMyZDZiYWE3YTU0YWVlZTQxOTJhMDHMotSJ: 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgzNTc3MTQ0MGMyZDZiYWE3YTU0YWVlZTQxOTJhMDHMotSJ: 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: ]] 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:09.955 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.213 nvme0n1 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM5MTNhNDU1YWQ1YzE4N2E2NjczN2RhMTM2MTM1MjU0MjhlZmNkNTE4N2Q3ZGE4uSa3NQ==: 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OWM5MTNhNDU1YWQ1YzE4N2E2NjczN2RhMTM2MTM1MjU0MjhlZmNkNTE4N2Q3ZGE4uSa3NQ==: 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: ]] 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.213 22:46:53 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:10.213 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.471 nvme0n1 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:10.471 22:46:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTY0YjgzMWY1M2VhZTY2YTE5MGRiZTE3MzNkNjIxMDkxOGU1Njg1MDU1ZGRkN2I3MmM5M2JiZGIxMWU0YzVhOKEY3uM=: 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTY0YjgzMWY1M2VhZTY2YTE5MGRiZTE3MzNkNjIxMDkxOGU1Njg1MDU1ZGRkN2I3MmM5M2JiZGIxMWU0YzVhOKEY3uM=: 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:10.471 22:46:53 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:10.471 22:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.729 nvme0n1 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDc0MzViNzBjZmYwYjQ4Zjc0NzE1ODM3NTNmNjEyMThRYlY1: 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:10.730 22:46:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDc0MzViNzBjZmYwYjQ4Zjc0NzE1ODM3NTNmNjEyMThRYlY1: 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: ]] 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.730 22:46:54 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:10.730 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.988 nvme0n1 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.988 
22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM4ZjhhYmMyZjVjNzJlNGQ5ZTBjNzBiMGY4YTU3MWI3MWU2Yzk0NTFkMzgxZDY5Z4VzIQ==: 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM4ZjhhYmMyZjVjNzJlNGQ5ZTBjNzBiMGY4YTU3MWI3MWU2Yzk0NTFkMzgxZDY5Z4VzIQ==: 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: ]] 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@553 -- # xtrace_disable 00:23:10.988 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.246 nvme0n1 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgzNTc3MTQ0MGMyZDZiYWE3YTU0YWVlZTQxOTJhMDHMotSJ: 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgzNTc3MTQ0MGMyZDZiYWE3YTU0YWVlZTQxOTJhMDHMotSJ: 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: ]] 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:11.246 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.504 nvme0n1 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.504 22:46:54 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM5MTNhNDU1YWQ1YzE4N2E2NjczN2RhMTM2MTM1MjU0MjhlZmNkNTE4N2Q3ZGE4uSa3NQ==: 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM5MTNhNDU1YWQ1YzE4N2E2NjczN2RhMTM2MTM1MjU0MjhlZmNkNTE4N2Q3ZGE4uSa3NQ==: 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: ]] 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha256 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:11.504 22:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.762 nvme0n1 00:23:11.762 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:11.762 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.762 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.762 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:11.762 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.762 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:11.762 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.762 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.762 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:11.762 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.762 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:11.762 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.762 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:11.762 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.762 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:11.762 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:11.762 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:11.762 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZTY0YjgzMWY1M2VhZTY2YTE5MGRiZTE3MzNkNjIxMDkxOGU1Njg1MDU1ZGRkN2I3MmM5M2JiZGIxMWU0YzVhOKEY3uM=: 00:23:11.763 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:11.763 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:11.763 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:11.763 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTY0YjgzMWY1M2VhZTY2YTE5MGRiZTE3MzNkNjIxMDkxOGU1Njg1MDU1ZGRkN2I3MmM5M2JiZGIxMWU0YzVhOKEY3uM=: 00:23:11.763 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:11.763 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:11.763 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.763 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:11.763 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:11.763 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:11.763 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.763 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:11.763 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:11.763 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.763 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:11.763 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.763 22:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.763 22:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.763 22:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A 
ip_candidates 00:23:11.763 22:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.763 22:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.763 22:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:11.763 22:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.763 22:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:11.763 22:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:11.763 22:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:11.763 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:11.763 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:11.763 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.021 nvme0n1 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 
-- # xtrace_disable 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDc0MzViNzBjZmYwYjQ4Zjc0NzE1ODM3NTNmNjEyMThRYlY1: 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDc0MzViNzBjZmYwYjQ4Zjc0NzE1ODM3NTNmNjEyMThRYlY1: 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: ]] 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:12.021 22:46:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:12.021 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.280 nvme0n1 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=1 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM4ZjhhYmMyZjVjNzJlNGQ5ZTBjNzBiMGY4YTU3MWI3MWU2Yzk0NTFkMzgxZDY5Z4VzIQ==: 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM4ZjhhYmMyZjVjNzJlNGQ5ZTBjNzBiMGY4YTU3MWI3MWU2Yzk0NTFkMzgxZDY5Z4VzIQ==: 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: ]] 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:12.280 22:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.538 nvme0n1 00:23:12.538 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:12.538 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.538 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:12.538 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.538 22:46:56 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.538 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:12.796 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.796 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.796 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:12.796 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.796 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:12.796 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.796 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:12.796 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.796 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:12.796 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:12.796 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:12.796 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgzNTc3MTQ0MGMyZDZiYWE3YTU0YWVlZTQxOTJhMDHMotSJ: 00:23:12.796 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: 00:23:12.796 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:12.796 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:12.796 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgzNTc3MTQ0MGMyZDZiYWE3YTU0YWVlZTQxOTJhMDHMotSJ: 00:23:12.796 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: ]] 00:23:12.796 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: 00:23:12.796 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:12.796 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.796 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:12.796 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:12.796 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:12.796 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.796 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:12.796 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:12.796 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.796 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:12.796 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.796 22:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.796 22:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.796 22:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.797 22:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.797 22:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.797 22:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:12.797 22:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.797 22:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.797 22:46:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.797 22:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.797 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:12.797 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:12.797 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.055 nvme0n1 00:23:13.055 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.056 22:46:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM5MTNhNDU1YWQ1YzE4N2E2NjczN2RhMTM2MTM1MjU0MjhlZmNkNTE4N2Q3ZGE4uSa3NQ==: 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM5MTNhNDU1YWQ1YzE4N2E2NjczN2RhMTM2MTM1MjU0MjhlZmNkNTE4N2Q3ZGE4uSa3NQ==: 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: ]] 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:13.056 22:46:56 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:13.056 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.314 nvme0n1 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:13.314 22:46:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTY0YjgzMWY1M2VhZTY2YTE5MGRiZTE3MzNkNjIxMDkxOGU1Njg1MDU1ZGRkN2I3MmM5M2JiZGIxMWU0YzVhOKEY3uM=: 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTY0YjgzMWY1M2VhZTY2YTE5MGRiZTE3MzNkNjIxMDkxOGU1Njg1MDU1ZGRkN2I3MmM5M2JiZGIxMWU0YzVhOKEY3uM=: 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 
00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.314 
22:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:13.314 22:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.880 nvme0n1 00:23:13.880 22:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:13.880 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.880 22:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:13.880 22:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.880 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.880 22:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:13.880 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.880 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.880 22:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:13.880 22:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.880 22:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:13.880 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:13.880 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.880 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:13.880 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:23:13.880 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:13.880 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:13.880 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:13.880 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDc0MzViNzBjZmYwYjQ4Zjc0NzE1ODM3NTNmNjEyMThRYlY1: 00:23:13.880 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: 00:23:13.880 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:13.880 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:13.880 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDc0MzViNzBjZmYwYjQ4Zjc0NzE1ODM3NTNmNjEyMThRYlY1: 00:23:13.880 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: ]] 00:23:13.880 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: 00:23:13.880 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:13.880 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.880 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:13.881 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:13.881 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:13.881 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.881 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:13.881 22:46:57 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:13.881 22:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.881 22:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:13.881 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.881 22:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.881 22:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.881 22:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.881 22:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.881 22:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.881 22:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:13.881 22:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.881 22:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.881 22:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.881 22:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.881 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:13.881 22:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:13.881 22:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.447 nvme0n1 00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.447 22:46:57 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM4ZjhhYmMyZjVjNzJlNGQ5ZTBjNzBiMGY4YTU3MWI3MWU2Yzk0NTFkMzgxZDY5Z4VzIQ==:
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==:
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM4ZjhhYmMyZjVjNzJlNGQ5ZTBjNzBiMGY4YTU3MWI3MWU2Yzk0NTFkMzgxZDY5Z4VzIQ==:
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: ]]
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==:
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:14.447 22:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:14.448 22:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:14.448 22:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:14.448 22:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:14.448 22:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:14.448 22:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:14.448 22:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:15.014 nvme0n1
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgzNTc3MTQ0MGMyZDZiYWE3YTU0YWVlZTQxOTJhMDHMotSJ:
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX:
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgzNTc3MTQ0MGMyZDZiYWE3YTU0YWVlZTQxOTJhMDHMotSJ:
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: ]]
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX:
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:15.014 22:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:15.580 nvme0n1
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM5MTNhNDU1YWQ1YzE4N2E2NjczN2RhMTM2MTM1MjU0MjhlZmNkNTE4N2Q3ZGE4uSa3NQ==:
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF:
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM5MTNhNDU1YWQ1YzE4N2E2NjczN2RhMTM2MTM1MjU0MjhlZmNkNTE4N2Q3ZGE4uSa3NQ==:
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: ]]
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF:
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:15.580 22:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:16.147 nvme0n1
00:23:16.147 22:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:16.147 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:16.147 22:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:16.147 22:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:16.147 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:16.147 22:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:16.147 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:16.147 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:16.147 22:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:16.147 22:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:16.147 22:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:16.147 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:16.147 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:23:16.147 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:16.147 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:16.147 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:23:16.147 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:23:16.147 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTY0YjgzMWY1M2VhZTY2YTE5MGRiZTE3MzNkNjIxMDkxOGU1Njg1MDU1ZGRkN2I3MmM5M2JiZGIxMWU0YzVhOKEY3uM=:
00:23:16.147 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:23:16.147 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:16.147 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:23:16.148 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTY0YjgzMWY1M2VhZTY2YTE5MGRiZTE3MzNkNjIxMDkxOGU1Njg1MDU1ZGRkN2I3MmM5M2JiZGIxMWU0YzVhOKEY3uM=:
00:23:16.148 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:23:16.148 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:23:16.148 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:16.148 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:16.148 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:23:16.148 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:23:16.148 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:16.148 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:23:16.148 22:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:16.148 22:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:16.148 22:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:16.148 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:16.148 22:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:16.148 22:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:16.148 22:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:16.148 22:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:16.148 22:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:16.148 22:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:16.148 22:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:16.148 22:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:16.148 22:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:16.148 22:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:16.148 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:23:16.148 22:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:16.148 22:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:16.406 nvme0n1
00:23:16.406 22:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:16.406 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:16.406 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:16.406 22:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:16.406 22:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:16.664 22:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:16.664 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:16.664 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:16.664 22:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:16.664 22:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:16.664 22:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:16.664 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:23:16.664 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:16.664 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:23:16.664 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:16.664 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:16.664 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:23:16.664 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:23:16.664 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDc0MzViNzBjZmYwYjQ4Zjc0NzE1ODM3NTNmNjEyMThRYlY1:
00:23:16.664 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=:
00:23:16.664 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:16.664 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:23:16.664 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDc0MzViNzBjZmYwYjQ4Zjc0NzE1ODM3NTNmNjEyMThRYlY1:
00:23:16.664 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: ]]
00:23:16.664 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=:
00:23:16.664 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:23:16.664 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:16.664 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:16.664 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:23:16.664 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:23:16.664 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:16.664 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:23:16.665 22:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:16.665 22:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:16.665 22:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:16.665 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:16.665 22:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:16.665 22:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:16.665 22:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:16.665 22:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:16.665 22:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:16.665 22:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:16.665 22:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:16.665 22:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:16.665 22:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:16.665 22:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:16.665 22:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:16.665 22:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:16.665 22:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:17.630 nvme0n1
00:23:17.630 22:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:17.630 22:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:17.630 22:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:17.630 22:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:17.630 22:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:17.630 22:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:17.630 22:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:17.630 22:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:17.630 22:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:17.630 22:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:17.630 22:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:17.630 22:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:17.630 22:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:23:17.630 22:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:17.630 22:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:17.630 22:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:23:17.630 22:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:23:17.631 22:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM4ZjhhYmMyZjVjNzJlNGQ5ZTBjNzBiMGY4YTU3MWI3MWU2Yzk0NTFkMzgxZDY5Z4VzIQ==:
00:23:17.631 22:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==:
00:23:17.631 22:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:17.631 22:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:23:17.631 22:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM4ZjhhYmMyZjVjNzJlNGQ5ZTBjNzBiMGY4YTU3MWI3MWU2Yzk0NTFkMzgxZDY5Z4VzIQ==:
00:23:17.631 22:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: ]]
00:23:17.631 22:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==:
00:23:17.631 22:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:23:17.631 22:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:17.631 22:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:17.631 22:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:23:17.631 22:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:23:17.631 22:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:17.631 22:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:23:17.631 22:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:17.631 22:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:17.631 22:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:17.631 22:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:17.631 22:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:17.631 22:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:17.631 22:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:17.631 22:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:17.631 22:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:17.631 22:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:17.631 22:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:17.631 22:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:17.631 22:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:17.631 22:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:17.631 22:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:17.631 22:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:17.631 22:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:18.564 nvme0n1
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgzNTc3MTQ0MGMyZDZiYWE3YTU0YWVlZTQxOTJhMDHMotSJ:
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX:
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgzNTc3MTQ0MGMyZDZiYWE3YTU0YWVlZTQxOTJhMDHMotSJ:
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: ]]
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX:
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:18.564 22:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:19.496 nvme0n1
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM5MTNhNDU1YWQ1YzE4N2E2NjczN2RhMTM2MTM1MjU0MjhlZmNkNTE4N2Q3ZGE4uSa3NQ==:
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF:
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM5MTNhNDU1YWQ1YzE4N2E2NjczN2RhMTM2MTM1MjU0MjhlZmNkNTE4N2Q3ZGE4uSa3NQ==:
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: ]]
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF:
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:19.496 22:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:20.428 nvme0n1
00:23:20.428 22:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:20.428 22:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:20.428 22:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:20.428 22:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:20.428 22:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:20.428 22:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:20.428 22:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:20.428 22:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:20.428 22:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:20.428 22:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:20.428 22:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:20.428 22:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:20.429 22:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:23:20.429 22:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:20.429 22:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:20.429 22:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:20.429 22:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:20.429 22:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTY0YjgzMWY1M2VhZTY2YTE5MGRiZTE3MzNkNjIxMDkxOGU1Njg1MDU1ZGRkN2I3MmM5M2JiZGIxMWU0YzVhOKEY3uM=: 00:23:20.429 22:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:20.429 22:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:20.429 22:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:20.429 22:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTY0YjgzMWY1M2VhZTY2YTE5MGRiZTE3MzNkNjIxMDkxOGU1Njg1MDU1ZGRkN2I3MmM5M2JiZGIxMWU0YzVhOKEY3uM=: 00:23:20.429 22:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:20.429 22:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:23:20.429 22:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.429 22:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:20.429 22:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:20.429 22:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:20.429 22:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.429 22:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:20.429 22:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:20.429 22:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.429 22:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:20.429 22:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.429 22:47:03 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.429 22:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.429 22:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.429 22:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.429 22:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.429 22:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:20.429 22:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.429 22:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.429 22:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.429 22:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.429 22:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:20.429 22:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:20.429 22:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.798 nvme0n1 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDc0MzViNzBjZmYwYjQ4Zjc0NzE1ODM3NTNmNjEyMThRYlY1: 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDc0MzViNzBjZmYwYjQ4Zjc0NzE1ODM3NTNmNjEyMThRYlY1: 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: ]] 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.798 
22:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:21.798 22:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.798 nvme0n1 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.798 
22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM4ZjhhYmMyZjVjNzJlNGQ5ZTBjNzBiMGY4YTU3MWI3MWU2Yzk0NTFkMzgxZDY5Z4VzIQ==: 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM4ZjhhYmMyZjVjNzJlNGQ5ZTBjNzBiMGY4YTU3MWI3MWU2Yzk0NTFkMzgxZDY5Z4VzIQ==: 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: ]] 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:21.798 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.054 nvme0n1 00:23:22.054 22:47:05 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgzNTc3MTQ0MGMyZDZiYWE3YTU0YWVlZTQxOTJhMDHMotSJ: 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:22.054 22:47:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgzNTc3MTQ0MGMyZDZiYWE3YTU0YWVlZTQxOTJhMDHMotSJ: 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: ]] 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.054 22:47:05 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:22.054 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.310 nvme0n1 00:23:22.310 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.311 
22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM5MTNhNDU1YWQ1YzE4N2E2NjczN2RhMTM2MTM1MjU0MjhlZmNkNTE4N2Q3ZGE4uSa3NQ==: 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM5MTNhNDU1YWQ1YzE4N2E2NjczN2RhMTM2MTM1MjU0MjhlZmNkNTE4N2Q3ZGE4uSa3NQ==: 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: ]] 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # keyid=3 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:22.311 22:47:05 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.311 nvme0n1 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.311 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTY0YjgzMWY1M2VhZTY2YTE5MGRiZTE3MzNkNjIxMDkxOGU1Njg1MDU1ZGRkN2I3MmM5M2JiZGIxMWU0YzVhOKEY3uM=: 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:22.568 22:47:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTY0YjgzMWY1M2VhZTY2YTE5MGRiZTE3MzNkNjIxMDkxOGU1Njg1MDU1ZGRkN2I3MmM5M2JiZGIxMWU0YzVhOKEY3uM=: 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.568 22:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:22.569 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:22.569 22:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.569 nvme0n1 00:23:22.569 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:22.569 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.569 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:22.569 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.569 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.569 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:22.569 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.569 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.569 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:22.569 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.825 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 
]] 00:23:22.825 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:22.825 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.825 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:22.825 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.825 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:22.825 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:22.825 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:22.825 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDc0MzViNzBjZmYwYjQ4Zjc0NzE1ODM3NTNmNjEyMThRYlY1: 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDc0MzViNzBjZmYwYjQ4Zjc0NzE1ODM3NTNmNjEyMThRYlY1: 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: ]] 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:22.826 22:47:06 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.826 nvme0n1 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:22.826 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM4ZjhhYmMyZjVjNzJlNGQ5ZTBjNzBiMGY4YTU3MWI3MWU2Yzk0NTFkMzgxZDY5Z4VzIQ==: 00:23:23.082 22:47:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM4ZjhhYmMyZjVjNzJlNGQ5ZTBjNzBiMGY4YTU3MWI3MWU2Yzk0NTFkMzgxZDY5Z4VzIQ==: 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: ]] 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 
00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.082 nvme0n1 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.082 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgzNTc3MTQ0MGMyZDZiYWE3YTU0YWVlZTQxOTJhMDHMotSJ: 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgzNTc3MTQ0MGMyZDZiYWE3YTU0YWVlZTQxOTJhMDHMotSJ: 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: ]] 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.340 nvme0n1 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.340 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM5MTNhNDU1YWQ1YzE4N2E2NjczN2RhMTM2MTM1MjU0MjhlZmNkNTE4N2Q3ZGE4uSa3NQ==: 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM5MTNhNDU1YWQ1YzE4N2E2NjczN2RhMTM2MTM1MjU0MjhlZmNkNTE4N2Q3ZGE4uSa3NQ==: 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: ]] 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:23.599 22:47:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:23.599 22:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.599 nvme0n1 00:23:23.599 22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:23.599 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.599 22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:23.599 22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.599 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.599 22:47:07 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTY0YjgzMWY1M2VhZTY2YTE5MGRiZTE3MzNkNjIxMDkxOGU1Njg1MDU1ZGRkN2I3MmM5M2JiZGIxMWU0YzVhOKEY3uM=: 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTY0YjgzMWY1M2VhZTY2YTE5MGRiZTE3MzNkNjIxMDkxOGU1Njg1MDU1ZGRkN2I3MmM5M2JiZGIxMWU0YzVhOKEY3uM=: 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 
-- # local digest dhgroup keyid ckey 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.857 nvme0n1 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.857 22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:24.115 
22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDc0MzViNzBjZmYwYjQ4Zjc0NzE1ODM3NTNmNjEyMThRYlY1: 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDc0MzViNzBjZmYwYjQ4Zjc0NzE1ODM3NTNmNjEyMThRYlY1: 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: ]] 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.115 
22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:24.115 22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.373 nvme0n1 00:23:24.373 22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:24.373 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.373 22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:24.373 22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.373 22:47:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.373 22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:24.373 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.373 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.373 22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:24.373 22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.373 22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:24.373 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.373 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:24.373 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.373 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:24.373 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:24.373 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:24.373 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM4ZjhhYmMyZjVjNzJlNGQ5ZTBjNzBiMGY4YTU3MWI3MWU2Yzk0NTFkMzgxZDY5Z4VzIQ==: 00:23:24.373 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: 00:23:24.373 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:24.373 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:24.373 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM4ZjhhYmMyZjVjNzJlNGQ5ZTBjNzBiMGY4YTU3MWI3MWU2Yzk0NTFkMzgxZDY5Z4VzIQ==: 00:23:24.373 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: ]] 00:23:24.373 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: 00:23:24.373 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:23:24.373 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.373 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:24.373 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:24.373 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:24.373 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.373 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:24.373 22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:24.373 22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.373 22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:24.374 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.374 22:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.374 22:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.374 22:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.374 22:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.374 22:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.374 22:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:24.374 22:47:07 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.374 22:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:24.374 22:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:24.374 22:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:24.374 22:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:24.374 22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:24.374 22:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.632 nvme0n1 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgzNTc3MTQ0MGMyZDZiYWE3YTU0YWVlZTQxOTJhMDHMotSJ: 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgzNTc3MTQ0MGMyZDZiYWE3YTU0YWVlZTQxOTJhMDHMotSJ: 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: ]] 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe4096 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:24.632 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.198 nvme0n1 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM5MTNhNDU1YWQ1YzE4N2E2NjczN2RhMTM2MTM1MjU0MjhlZmNkNTE4N2Q3ZGE4uSa3NQ==: 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OWM5MTNhNDU1YWQ1YzE4N2E2NjczN2RhMTM2MTM1MjU0MjhlZmNkNTE4N2Q3ZGE4uSa3NQ==: 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: ]] 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.198 22:47:08 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:25.198 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.455 nvme0n1 00:23:25.455 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:25.455 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.455 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.455 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:25.455 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.455 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:25.455 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.455 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.455 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:25.455 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.455 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:25.455 22:47:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.456 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:25.456 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.456 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:25.456 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:25.456 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:25.456 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTY0YjgzMWY1M2VhZTY2YTE5MGRiZTE3MzNkNjIxMDkxOGU1Njg1MDU1ZGRkN2I3MmM5M2JiZGIxMWU0YzVhOKEY3uM=: 00:23:25.456 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:25.456 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:25.456 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:25.456 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTY0YjgzMWY1M2VhZTY2YTE5MGRiZTE3MzNkNjIxMDkxOGU1Njg1MDU1ZGRkN2I3MmM5M2JiZGIxMWU0YzVhOKEY3uM=: 00:23:25.456 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:25.456 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:23:25.456 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.456 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:25.456 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:25.456 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:25.456 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.456 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:25.456 22:47:08 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:25.456 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.456 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:25.456 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.456 22:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.456 22:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.456 22:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.456 22:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.456 22:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.456 22:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.456 22:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.456 22:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.456 22:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.456 22:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.456 22:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:25.456 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:25.456 22:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.714 nvme0n1 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDc0MzViNzBjZmYwYjQ4Zjc0NzE1ODM3NTNmNjEyMThRYlY1: 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:25.714 22:47:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDc0MzViNzBjZmYwYjQ4Zjc0NzE1ODM3NTNmNjEyMThRYlY1: 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: ]] 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.714 22:47:09 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:25.714 22:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.279 nvme0n1 00:23:26.279 22:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:26.279 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.279 22:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:26.279 22:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.279 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.279 22:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:26.279 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.279 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.279 22:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:26.279 22:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.279 
22:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:26.279 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.279 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:26.279 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.279 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:26.279 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:26.279 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:26.279 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM4ZjhhYmMyZjVjNzJlNGQ5ZTBjNzBiMGY4YTU3MWI3MWU2Yzk0NTFkMzgxZDY5Z4VzIQ==: 00:23:26.279 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: 00:23:26.279 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:26.279 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:26.279 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM4ZjhhYmMyZjVjNzJlNGQ5ZTBjNzBiMGY4YTU3MWI3MWU2Yzk0NTFkMzgxZDY5Z4VzIQ==: 00:23:26.279 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: ]] 00:23:26.279 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: 00:23:26.279 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:26.279 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.279 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:26.279 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe6144 00:23:26.279 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:26.279 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.279 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:26.279 22:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:26.280 22:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.280 22:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:26.280 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.280 22:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.280 22:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.280 22:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.280 22:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.280 22:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.280 22:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.280 22:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.280 22:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.280 22:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.280 22:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.280 22:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:26.280 22:47:09 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@553 -- # xtrace_disable 00:23:26.280 22:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.844 nvme0n1 00:23:26.844 22:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:26.844 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.844 22:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:26.844 22:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.844 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.844 22:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:27.102 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.102 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.102 22:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:27.102 22:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.102 22:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:27.102 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.102 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:27.102 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.102 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:27.102 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:27.102 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:27.102 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgzNTc3MTQ0MGMyZDZiYWE3YTU0YWVlZTQxOTJhMDHMotSJ: 00:23:27.102 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: 00:23:27.102 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:27.102 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:27.102 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgzNTc3MTQ0MGMyZDZiYWE3YTU0YWVlZTQxOTJhMDHMotSJ: 00:23:27.102 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: ]] 00:23:27.102 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: 00:23:27.102 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:27.102 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.102 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:27.102 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:27.102 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:27.102 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.126 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:27.126 22:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:27.126 22:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.126 22:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:27.126 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.126 22:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.126 22:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.126 22:47:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.126 22:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.126 22:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.126 22:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.126 22:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.126 22:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.126 22:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.126 22:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.126 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.126 22:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:27.126 22:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.692 nvme0n1 00:23:27.692 22:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:27.692 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.692 22:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:27.692 22:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.693 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.693 22:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:27.693 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.693 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.693 22:47:10 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:27.693 22:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.693 22:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:27.693 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.693 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:27.693 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.693 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:27.693 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:27.693 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:27.693 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM5MTNhNDU1YWQ1YzE4N2E2NjczN2RhMTM2MTM1MjU0MjhlZmNkNTE4N2Q3ZGE4uSa3NQ==: 00:23:27.693 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: 00:23:27.693 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:27.693 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:27.693 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM5MTNhNDU1YWQ1YzE4N2E2NjczN2RhMTM2MTM1MjU0MjhlZmNkNTE4N2Q3ZGE4uSa3NQ==: 00:23:27.693 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: ]] 00:23:27.693 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: 00:23:27.693 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:27.693 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.693 22:47:10 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha384 00:23:27.693 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:27.693 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:27.693 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.693 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:27.693 22:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:27.693 22:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.693 22:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:27.693 22:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.693 22:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.693 22:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.693 22:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.693 22:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.693 22:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.693 22:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.693 22:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.693 22:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.693 22:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.693 22:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.693 22:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:27.693 22:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:27.693 22:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.263 nvme0n1 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZTY0YjgzMWY1M2VhZTY2YTE5MGRiZTE3MzNkNjIxMDkxOGU1Njg1MDU1ZGRkN2I3MmM5M2JiZGIxMWU0YzVhOKEY3uM=: 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTY0YjgzMWY1M2VhZTY2YTE5MGRiZTE3MzNkNjIxMDkxOGU1Njg1MDU1ZGRkN2I3MmM5M2JiZGIxMWU0YzVhOKEY3uM=: 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A 
ip_candidates 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:28.263 22:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.832 nvme0n1 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 
-- # xtrace_disable 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDc0MzViNzBjZmYwYjQ4Zjc0NzE1ODM3NTNmNjEyMThRYlY1: 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDc0MzViNzBjZmYwYjQ4Zjc0NzE1ODM3NTNmNjEyMThRYlY1: 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: ]] 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:28.832 22:47:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:28.832 22:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.771 nvme0n1 00:23:29.771 22:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:29.771 22:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.771 22:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.771 22:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:29.771 22:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.771 22:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:29.771 22:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.771 22:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.771 22:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:29.771 22:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.771 22:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:29.771 22:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.771 22:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:23:29.771 22:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.772 22:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:29.772 22:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:29.772 22:47:13 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=1 00:23:29.772 22:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM4ZjhhYmMyZjVjNzJlNGQ5ZTBjNzBiMGY4YTU3MWI3MWU2Yzk0NTFkMzgxZDY5Z4VzIQ==: 00:23:29.772 22:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: 00:23:29.772 22:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:29.772 22:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:29.772 22:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM4ZjhhYmMyZjVjNzJlNGQ5ZTBjNzBiMGY4YTU3MWI3MWU2Yzk0NTFkMzgxZDY5Z4VzIQ==: 00:23:29.772 22:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: ]] 00:23:29.772 22:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: 00:23:29.772 22:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:29.772 22:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.772 22:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:29.772 22:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:29.772 22:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:29.772 22:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.772 22:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:29.772 22:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:29.772 22:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.772 22:47:13 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:29.772 22:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.772 22:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.772 22:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.772 22:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.772 22:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.772 22:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.772 22:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.772 22:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.772 22:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.772 22:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.772 22:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.772 22:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:29.772 22:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:29.772 22:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.735 nvme0n1 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgzNTc3MTQ0MGMyZDZiYWE3YTU0YWVlZTQxOTJhMDHMotSJ: 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgzNTc3MTQ0MGMyZDZiYWE3YTU0YWVlZTQxOTJhMDHMotSJ: 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: ]] 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:30.735 22:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.679 nvme0n1 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.679 22:47:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM5MTNhNDU1YWQ1YzE4N2E2NjczN2RhMTM2MTM1MjU0MjhlZmNkNTE4N2Q3ZGE4uSa3NQ==: 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM5MTNhNDU1YWQ1YzE4N2E2NjczN2RhMTM2MTM1MjU0MjhlZmNkNTE4N2Q3ZGE4uSa3NQ==: 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: ]] 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:31.679 22:47:15 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.679 22:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.680 22:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.680 22:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.680 22:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.680 22:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.680 22:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.680 22:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:31.680 22:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:31.680 22:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.628 nvme0n1 00:23:32.628 22:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:32.628 22:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.628 22:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:32.628 22:47:16 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.628 22:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.628 22:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:32.887 22:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.887 22:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.887 22:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:32.887 22:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.887 22:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:32.887 22:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.887 22:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:32.887 22:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.887 22:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:32.887 22:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:32.887 22:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:32.887 22:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTY0YjgzMWY1M2VhZTY2YTE5MGRiZTE3MzNkNjIxMDkxOGU1Njg1MDU1ZGRkN2I3MmM5M2JiZGIxMWU0YzVhOKEY3uM=: 00:23:32.887 22:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:32.888 22:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:32.888 22:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:32.888 22:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTY0YjgzMWY1M2VhZTY2YTE5MGRiZTE3MzNkNjIxMDkxOGU1Njg1MDU1ZGRkN2I3MmM5M2JiZGIxMWU0YzVhOKEY3uM=: 00:23:32.888 22:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 
00:23:32.888 22:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4
00:23:32.888 22:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:32.888 22:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:23:32.888 22:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:23:32.888 22:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:23:32.888 22:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:32.888 22:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:23:32.888 22:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:32.888 22:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:32.888 22:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:32.888 22:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:32.888 22:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:32.888 22:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:32.888 22:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:32.888 22:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:32.888 22:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:32.888 22:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:32.888 22:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:32.888 22:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:32.888 22:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:32.888 22:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:32.888 22:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:23:32.888 22:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:32.888 22:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:33.827 nvme0n1
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDc0MzViNzBjZmYwYjQ4Zjc0NzE1ODM3NTNmNjEyMThRYlY1:
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=:
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDc0MzViNzBjZmYwYjQ4Zjc0NzE1ODM3NTNmNjEyMThRYlY1:
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: ]]
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=:
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:33.827 nvme0n1
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM4ZjhhYmMyZjVjNzJlNGQ5ZTBjNzBiMGY4YTU3MWI3MWU2Yzk0NTFkMzgxZDY5Z4VzIQ==:
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==:
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM4ZjhhYmMyZjVjNzJlNGQ5ZTBjNzBiMGY4YTU3MWI3MWU2Yzk0NTFkMzgxZDY5Z4VzIQ==:
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: ]]
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==:
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:33.827 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:34.088 nvme0n1
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgzNTc3MTQ0MGMyZDZiYWE3YTU0YWVlZTQxOTJhMDHMotSJ:
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX:
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgzNTc3MTQ0MGMyZDZiYWE3YTU0YWVlZTQxOTJhMDHMotSJ:
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: ]]
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX:
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:34.088 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:34.089 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:34.089 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:34.089 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:34.089 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:34.089 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:34.089 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:34.089 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:34.089 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:34.089 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:34.348 nvme0n1
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM5MTNhNDU1YWQ1YzE4N2E2NjczN2RhMTM2MTM1MjU0MjhlZmNkNTE4N2Q3ZGE4uSa3NQ==:
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF:
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM5MTNhNDU1YWQ1YzE4N2E2NjczN2RhMTM2MTM1MjU0MjhlZmNkNTE4N2Q3ZGE4uSa3NQ==:
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: ]]
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF:
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:34.348 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:34.608 nvme0n1
00:23:34.608 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:34.608 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:34.608 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:34.608 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:34.608 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:34.608 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:34.608 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:34.608 22:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:34.608 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:34.608 22:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:34.608 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:34.608 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:34.608 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:23:34.608 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:34.608 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:34.608 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:34.608 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:23:34.608 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTY0YjgzMWY1M2VhZTY2YTE5MGRiZTE3MzNkNjIxMDkxOGU1Njg1MDU1ZGRkN2I3MmM5M2JiZGIxMWU0YzVhOKEY3uM=:
00:23:34.608 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:23:34.608 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:34.608 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:23:34.608 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTY0YjgzMWY1M2VhZTY2YTE5MGRiZTE3MzNkNjIxMDkxOGU1Njg1MDU1ZGRkN2I3MmM5M2JiZGIxMWU0YzVhOKEY3uM=:
00:23:34.608 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:23:34.608 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:23:34.609 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:34.609 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:34.609 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:23:34.609 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:23:34.609 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:34.609 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:23:34.609 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:34.609 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:34.609 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:34.609 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:34.609 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:34.609 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:34.609 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:34.609 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:34.609 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:34.609 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:34.609 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:34.609 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:34.609 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:34.609 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:34.609 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:23:34.609 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:34.609 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:34.867 nvme0n1
00:23:34.867 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:34.867 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:34.867 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:34.867 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:34.867 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:34.867 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:34.867 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:34.867 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:34.867 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:34.867 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:34.867 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:34.867 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:23:34.867 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:34.867 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:23:34.867 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:34.867 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:34.867 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:23:34.867 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:23:34.867 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDc0MzViNzBjZmYwYjQ4Zjc0NzE1ODM3NTNmNjEyMThRYlY1:
00:23:34.867 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=:
00:23:34.867 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:34.867 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:23:34.867 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDc0MzViNzBjZmYwYjQ4Zjc0NzE1ODM3NTNmNjEyMThRYlY1:
00:23:34.867 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: ]]
00:23:34.867 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=:
00:23:34.867 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:23:34.868 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:34.868 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:34.868 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:23:34.868 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:23:34.868 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:34.868 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:23:34.868 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:34.868 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:34.868 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:34.868 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:34.868 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:34.868 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:34.868 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:34.868 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:34.868 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:34.868 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:34.868 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:34.868 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:34.868 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:34.868 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:34.868 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:34.868 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:34.868 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:35.126 nvme0n1
00:23:35.126 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:35.126 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:35.126 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:35.126 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:35.126 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:35.126 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:35.126 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:35.126 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:35.126 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:35.126 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:35.126 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:35.126 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:35.126 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:23:35.126 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:35.126 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:35.126 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:23:35.126 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:23:35.126 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM4ZjhhYmMyZjVjNzJlNGQ5ZTBjNzBiMGY4YTU3MWI3MWU2Yzk0NTFkMzgxZDY5Z4VzIQ==:
00:23:35.126 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==:
00:23:35.126 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:35.126 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:23:35.126 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM4ZjhhYmMyZjVjNzJlNGQ5ZTBjNzBiMGY4YTU3MWI3MWU2Yzk0NTFkMzgxZDY5Z4VzIQ==:
00:23:35.126 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: ]]
00:23:35.126 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==:
00:23:35.126 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:23:35.126 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:35.127 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:35.127 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:23:35.127 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:23:35.127 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:35.127 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:23:35.127 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:35.127 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:35.127 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:35.127 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:35.127 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:35.127 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:35.127 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:35.127 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:35.127 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:35.127 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:35.127 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:35.127 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:35.127 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:35.127 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:35.127 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:35.127 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:35.127 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:35.385 nvme0n1
00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgzNTc3MTQ0MGMyZDZiYWE3YTU0YWVlZTQxOTJhMDHMotSJ:
00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX:
00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgzNTc3MTQ0MGMyZDZiYWE3YTU0YWVlZTQxOTJhMDHMotSJ:
00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: ]]
00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX:
00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2
00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:35.385 22:47:18
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:35.385 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.645 nvme0n1 00:23:35.645 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:35.646 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.646 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:35.646 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.646 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.646 22:47:18 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:35.646 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.646 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.646 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:35.646 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.646 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:35.646 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.646 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:35.646 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.646 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:35.646 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:35.646 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:35.646 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM5MTNhNDU1YWQ1YzE4N2E2NjczN2RhMTM2MTM1MjU0MjhlZmNkNTE4N2Q3ZGE4uSa3NQ==: 00:23:35.646 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: 00:23:35.646 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:35.646 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:35.646 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM5MTNhNDU1YWQ1YzE4N2E2NjczN2RhMTM2MTM1MjU0MjhlZmNkNTE4N2Q3ZGE4uSa3NQ==: 00:23:35.646 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: ]] 00:23:35.646 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: 00:23:35.646 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:35.646 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.646 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:35.646 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:35.646 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:35.646 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.646 22:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:35.646 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:35.646 22:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.646 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:35.646 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.646 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.646 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.646 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.646 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.646 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.646 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:35.646 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.646 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.646 22:47:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.646 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.646 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:35.646 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:35.646 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.908 nvme0n1 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.908 22:47:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTY0YjgzMWY1M2VhZTY2YTE5MGRiZTE3MzNkNjIxMDkxOGU1Njg1MDU1ZGRkN2I3MmM5M2JiZGIxMWU0YzVhOKEY3uM=: 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTY0YjgzMWY1M2VhZTY2YTE5MGRiZTE3MzNkNjIxMDkxOGU1Njg1MDU1ZGRkN2I3MmM5M2JiZGIxMWU0YzVhOKEY3uM=: 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:35.908 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.168 nvme0n1 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 
]] 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDc0MzViNzBjZmYwYjQ4Zjc0NzE1ODM3NTNmNjEyMThRYlY1: 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDc0MzViNzBjZmYwYjQ4Zjc0NzE1ODM3NTNmNjEyMThRYlY1: 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: ]] 00:23:36.168 22:47:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:36.168 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.428 nvme0n1 00:23:36.428 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:36.428 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.428 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:36.428 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.428 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.428 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:36.428 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.428 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.428 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:36.428 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.428 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:36.428 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.428 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:36.428 22:47:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.428 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:36.428 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:36.428 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:36.428 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM4ZjhhYmMyZjVjNzJlNGQ5ZTBjNzBiMGY4YTU3MWI3MWU2Yzk0NTFkMzgxZDY5Z4VzIQ==: 00:23:36.428 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: 00:23:36.428 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:36.428 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:36.429 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM4ZjhhYmMyZjVjNzJlNGQ5ZTBjNzBiMGY4YTU3MWI3MWU2Yzk0NTFkMzgxZDY5Z4VzIQ==: 00:23:36.429 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: ]] 00:23:36.429 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: 00:23:36.429 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:36.429 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.429 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:36.429 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:36.429 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:36.429 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.429 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:36.429 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:36.429 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.429 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:36.429 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.429 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:36.429 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:36.429 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:36.429 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.429 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.429 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:36.429 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.429 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:36.429 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:36.429 22:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:36.429 22:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:36.429 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:36.429 22:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.000 nvme0n1 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgzNTc3MTQ0MGMyZDZiYWE3YTU0YWVlZTQxOTJhMDHMotSJ: 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NDgzNTc3MTQ0MGMyZDZiYWE3YTU0YWVlZTQxOTJhMDHMotSJ: 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: ]] 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:37.000 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.261 nvme0n1 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM5MTNhNDU1YWQ1YzE4N2E2NjczN2RhMTM2MTM1MjU0MjhlZmNkNTE4N2Q3ZGE4uSa3NQ==: 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM5MTNhNDU1YWQ1YzE4N2E2NjczN2RhMTM2MTM1MjU0MjhlZmNkNTE4N2Q3ZGE4uSa3NQ==: 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: ]] 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:37.261 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.522 nvme0n1 00:23:37.522 22:47:20 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:37.522 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.522 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.522 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:37.522 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.522 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:37.522 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.522 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.522 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:37.522 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.522 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:37.522 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.522 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:37.522 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.522 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:37.522 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:37.522 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:37.522 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTY0YjgzMWY1M2VhZTY2YTE5MGRiZTE3MzNkNjIxMDkxOGU1Njg1MDU1ZGRkN2I3MmM5M2JiZGIxMWU0YzVhOKEY3uM=: 00:23:37.522 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:37.522 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:37.522 22:47:20 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe4096 00:23:37.522 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTY0YjgzMWY1M2VhZTY2YTE5MGRiZTE3MzNkNjIxMDkxOGU1Njg1MDU1ZGRkN2I3MmM5M2JiZGIxMWU0YzVhOKEY3uM=: 00:23:37.522 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:37.522 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:37.523 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.523 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:37.523 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:37.523 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:37.523 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.523 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:37.523 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:37.523 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.523 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:37.523 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.523 22:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.523 22:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.523 22:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.523 22:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.523 22:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.523 22:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:23:37.523 22:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.523 22:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.523 22:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.523 22:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.523 22:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:37.523 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:37.523 22:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.782 nvme0n1 00:23:37.782 22:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:37.782 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.782 22:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:37.782 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.782 22:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.782 22:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:37.782 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.782 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.782 22:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:37.782 22:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:38.042 
22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDc0MzViNzBjZmYwYjQ4Zjc0NzE1ODM3NTNmNjEyMThRYlY1: 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDc0MzViNzBjZmYwYjQ4Zjc0NzE1ODM3NTNmNjEyMThRYlY1: 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: ]] 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 
00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:38.042 22:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:38.042 22:47:21 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.611 nvme0n1 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM4ZjhhYmMyZjVjNzJlNGQ5ZTBjNzBiMGY4YTU3MWI3MWU2Yzk0NTFkMzgxZDY5Z4VzIQ==: 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM4ZjhhYmMyZjVjNzJlNGQ5ZTBjNzBiMGY4YTU3MWI3MWU2Yzk0NTFkMzgxZDY5Z4VzIQ==: 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: ]] 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:38.611 22:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.176 nvme0n1 00:23:39.176 22:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:39.176 22:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.176 22:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:39.176 22:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.176 22:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.176 22:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:39.176 22:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.176 22:47:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.176 22:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:39.176 22:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.176 22:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:39.176 22:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.176 22:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:39.176 22:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.177 22:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:39.177 22:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:39.177 22:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:39.177 22:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgzNTc3MTQ0MGMyZDZiYWE3YTU0YWVlZTQxOTJhMDHMotSJ: 00:23:39.177 22:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: 00:23:39.177 22:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:39.177 22:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:39.177 22:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgzNTc3MTQ0MGMyZDZiYWE3YTU0YWVlZTQxOTJhMDHMotSJ: 00:23:39.177 22:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: ]] 00:23:39.177 22:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: 00:23:39.177 22:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:39.177 22:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid 
ckey 00:23:39.177 22:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:39.177 22:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:39.177 22:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:39.177 22:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.177 22:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:39.177 22:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:39.177 22:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.177 22:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:39.177 22:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.177 22:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.177 22:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.177 22:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.177 22:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.177 22:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.177 22:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.177 22:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.177 22:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.177 22:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.177 22:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.177 22:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:39.177 22:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:39.177 22:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.743 nvme0n1 00:23:39.743 22:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:39.743 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.743 22:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:39.743 22:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.743 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.743 22:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:39.743 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.743 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.743 22:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:39.743 22:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.743 22:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:39.743 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.743 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:23:39.743 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.744 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:39.744 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:39.744 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:39.744 22:47:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM5MTNhNDU1YWQ1YzE4N2E2NjczN2RhMTM2MTM1MjU0MjhlZmNkNTE4N2Q3ZGE4uSa3NQ==: 00:23:39.744 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: 00:23:39.744 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:39.744 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:39.744 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM5MTNhNDU1YWQ1YzE4N2E2NjczN2RhMTM2MTM1MjU0MjhlZmNkNTE4N2Q3ZGE4uSa3NQ==: 00:23:39.744 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: ]] 00:23:39.744 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: 00:23:39.744 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:39.744 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.744 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:39.744 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:39.744 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:39.744 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.744 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:39.744 22:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:39.744 22:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.744 22:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:39.744 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:23:39.744 22:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.744 22:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.744 22:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.744 22:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.744 22:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.744 22:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.744 22:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.744 22:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.744 22:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.744 22:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.744 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:39.744 22:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:39.744 22:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.310 nvme0n1 00:23:40.310 22:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:40.310 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.310 22:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:40.310 22:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.310 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.310 22:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 
00:23:40.310 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.310 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.310 22:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTY0YjgzMWY1M2VhZTY2YTE5MGRiZTE3MzNkNjIxMDkxOGU1Njg1MDU1ZGRkN2I3MmM5M2JiZGIxMWU0YzVhOKEY3uM=: 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTY0YjgzMWY1M2VhZTY2YTE5MGRiZTE3MzNkNjIxMDkxOGU1Njg1MDU1ZGRkN2I3MmM5M2JiZGIxMWU0YzVhOKEY3uM=: 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.311 22:47:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:40.311 22:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.878 nvme0n1 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=0 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDc0MzViNzBjZmYwYjQ4Zjc0NzE1ODM3NTNmNjEyMThRYlY1: 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDc0MzViNzBjZmYwYjQ4Zjc0NzE1ODM3NTNmNjEyMThRYlY1: 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: ]] 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODUwMDFhMjc3NzY2YWQ1NjgwNGIyM2EwYTc5N2VlNTczOTFjNWMxM2FjNDlkZDIwNThhMjg4NjlmY2Q1N2U4MVJZ0+M=: 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:40.878 22:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:40.879 22:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.816 nvme0n1 00:23:41.816 22:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:41.816 22:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.816 22:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:41.816 22:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.816 22:47:25 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.816 22:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:41.816 22:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.816 22:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.816 22:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:41.816 22:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.816 22:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:41.816 22:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.816 22:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:41.816 22:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.816 22:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:41.816 22:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:41.816 22:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:41.816 22:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM4ZjhhYmMyZjVjNzJlNGQ5ZTBjNzBiMGY4YTU3MWI3MWU2Yzk0NTFkMzgxZDY5Z4VzIQ==: 00:23:41.816 22:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: 00:23:41.816 22:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:41.816 22:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:41.816 22:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM4ZjhhYmMyZjVjNzJlNGQ5ZTBjNzBiMGY4YTU3MWI3MWU2Yzk0NTFkMzgxZDY5Z4VzIQ==: 00:23:41.816 22:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: ]] 00:23:41.816 22:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: 00:23:41.816 22:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:41.816 22:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.816 22:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:41.816 22:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:41.816 22:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:41.816 22:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.816 22:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:41.816 22:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:41.816 22:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.816 22:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:42.074 22:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.074 22:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.074 22:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.074 22:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.074 22:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.074 22:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.074 22:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.074 22:47:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.074 22:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.074 22:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.074 22:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.074 22:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:42.074 22:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:42.074 22:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.005 nvme0n1 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgzNTc3MTQ0MGMyZDZiYWE3YTU0YWVlZTQxOTJhMDHMotSJ: 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgzNTc3MTQ0MGMyZDZiYWE3YTU0YWVlZTQxOTJhMDHMotSJ: 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: ]] 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE4NThjNDVlNDljYzI0OTBiYjEwMzg4OWYwNzNjNzHnOABX: 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe8192 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:43.005 22:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.944 nvme0n1 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM5MTNhNDU1YWQ1YzE4N2E2NjczN2RhMTM2MTM1MjU0MjhlZmNkNTE4N2Q3ZGE4uSa3NQ==: 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OWM5MTNhNDU1YWQ1YzE4N2E2NjczN2RhMTM2MTM1MjU0MjhlZmNkNTE4N2Q3ZGE4uSa3NQ==: 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: ]] 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yjc2OWEyZjFmYmY4YjJiMDZkZjI5Y2RkYTRhZWFjMGIX2UzF: 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.944 22:47:27 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:43.944 22:47:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.925 nvme0n1 00:23:44.925 22:47:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:44.925 22:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.925 22:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.925 22:47:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:44.925 22:47:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.925 22:47:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:44.925 22:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.925 22:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.925 22:47:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:44.925 22:47:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.925 22:47:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:44.925 22:47:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.925 22:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:44.925 22:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.925 22:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:44.925 22:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:44.925 22:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:44.926 22:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTY0YjgzMWY1M2VhZTY2YTE5MGRiZTE3MzNkNjIxMDkxOGU1Njg1MDU1ZGRkN2I3MmM5M2JiZGIxMWU0YzVhOKEY3uM=: 00:23:44.926 22:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:44.926 22:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:44.926 22:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:44.926 22:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTY0YjgzMWY1M2VhZTY2YTE5MGRiZTE3MzNkNjIxMDkxOGU1Njg1MDU1ZGRkN2I3MmM5M2JiZGIxMWU0YzVhOKEY3uM=: 00:23:44.926 22:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:44.926 22:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:23:44.926 22:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.926 22:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:44.926 22:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:44.926 22:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:44.926 22:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.926 22:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:44.926 22:47:28 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:44.926 22:47:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.926 22:47:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:44.926 22:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.926 22:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:44.926 22:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:44.926 22:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:44.926 22:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.926 22:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.926 22:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:44.926 22:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.926 22:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:44.926 22:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:44.926 22:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:44.926 22:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:44.926 22:47:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:44.926 22:47:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.862 nvme0n1 00:23:45.862 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:45.862 22:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:45.862 22:47:29 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:23:45.862 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:45.862 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.862 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:46.120 22:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.120 22:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.120 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:46.120 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.120 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:46.120 22:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:46.120 22:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.120 22:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:46.120 22:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:46.120 22:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM4ZjhhYmMyZjVjNzJlNGQ5ZTBjNzBiMGY4YTU3MWI3MWU2Yzk0NTFkMzgxZDY5Z4VzIQ==: 00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: 00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM4ZjhhYmMyZjVjNzJlNGQ5ZTBjNzBiMGY4YTU3MWI3MWU2Yzk0NTFkMzgxZDY5Z4VzIQ==: 00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 
-- # [[ -z DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: ]] 00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzQ0NDFjYjdlY2EwZTY3YjUyMWM0Y2U2ZThhNjExNjVmNzQ5YjBlMDk2N2ZlZWE058Yj3w==: 00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # local es=0 00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@645 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.121 request: 00:23:46.121 { 00:23:46.121 "name": "nvme0", 00:23:46.121 "trtype": "tcp", 00:23:46.121 "traddr": "10.0.0.1", 00:23:46.121 "adrfam": "ipv4", 00:23:46.121 "trsvcid": "4420", 00:23:46.121 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:46.121 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:46.121 "prchk_reftag": false, 00:23:46.121 "prchk_guard": false, 00:23:46.121 "hdgst": false, 00:23:46.121 "ddgst": false, 00:23:46.121 "method": "bdev_nvme_attach_controller", 00:23:46.121 "req_id": 1 00:23:46.121 } 00:23:46.121 Got JSON-RPC error response 00:23:46.121 response: 00:23:46.121 { 00:23:46.121 "code": -5, 00:23:46.121 "message": "Input/output error" 00:23:46.121 } 00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@645 -- # es=1 
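The `request:` block logged just above shows the exact parameter set that `rpc_cmd bdev_nvme_attach_controller` sends to the SPDK target. As a reading aid, here is a small Python sketch that builds an equivalent JSON-RPC request body; the field names are copied verbatim from the logged `request:` block, but the envelope layout and the helper itself are illustrative, not SPDK's own `scripts/rpc.py`:

```python
import json

def build_attach_request(req_id, name, traddr, trsvcid,
                         dhchap_key=None, dhchap_ctrlr_key=None):
    """Build a bdev_nvme_attach_controller request mirroring the log above.

    Field names are taken from the logged 'request:' block; the JSON-RPC
    envelope shape is an assumption for illustration.
    """
    params = {
        "name": name,                    # e.g. "nvme0"
        "trtype": "tcp",
        "traddr": traddr,                # e.g. "10.0.0.1"
        "adrfam": "ipv4",
        "trsvcid": trsvcid,              # e.g. "4420"
        "subnqn": "nqn.2024-02.io.spdk:cnode0",
        "hostnqn": "nqn.2024-02.io.spdk:host0",
        "prchk_reftag": False,
        "prchk_guard": False,
        "hdgst": False,
        "ddgst": False,
    }
    # DH-HMAC-CHAP keys are optional: the negative tests in this log omit
    # them (or pass a key the target does not expect) and get error -5 back.
    if dhchap_key is not None:
        params["dhchap_key"] = dhchap_key              # e.g. "key2"
    if dhchap_ctrlr_key is not None:
        params["dhchap_ctrlr_key"] = dhchap_ctrlr_key  # e.g. "ckey2"
    return {"jsonrpc": "2.0", "method": "bdev_nvme_attach_controller",
            "id": req_id, "params": params}

req = build_attach_request(1, "nvme0", "10.0.0.1", "4420", dhchap_key="key2")
print(json.dumps(req, indent=2))
```

Note how the `--dhchap-key key2` / `--dhchap-ctrlr-key ckey2` CLI flags in the trace map onto the `dhchap_key` / `dhchap_ctrlr_key` params in the logged request.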
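The secrets passed around in this trace (e.g. `DHHC-1:00:MDc0...:`) use the NVMe DH-HMAC-CHAP secret representation: a `DHHC-1:` prefix, a two-digit hash identifier (`00` = no transformation, `01`/`02`/`03` = SHA-256/384/512), a base64 blob of the secret bytes followed by a 4-byte checksum, and a trailing `:`. The sketch below round-trips that layout; the specific checksum variant (zlib CRC-32, little-endian) is my assumption about the format, so the code only checks self-consistency rather than validating the keys from the log:

```python
import base64
import struct
import zlib

def encode_dhchap(secret: bytes, hash_id: int = 0) -> str:
    # Append a CRC-32 of the secret (little-endian, zlib variant -- an
    # assumption) and base64-encode, giving the "DHHC-1:xx:<base64>:"
    # shape seen in the log above.
    blob = secret + struct.pack("<I", zlib.crc32(secret))
    return "DHHC-1:%02d:%s:" % (hash_id, base64.b64encode(blob).decode())

def decode_dhchap(key: str) -> bytes:
    prefix, _hash_id, b64, _ = key.split(":")
    if prefix != "DHHC-1":
        raise ValueError("not a DHHC-1 secret")
    blob = base64.b64decode(b64)
    secret, crc = blob[:-4], blob[-4:]
    if struct.pack("<I", zlib.crc32(secret)) != crc:
        raise ValueError("CRC mismatch")
    return secret

# Round-trip a 32-byte secret, the size implied by the 48-character
# base64 payloads of keys 0-2 in this trace.
s = bytes(range(32))
assert decode_dhchap(encode_dhchap(s)) == s
```

This also explains why the `:03:` controller keys in the log carry longer base64 payloads: they encode 48- or 64-byte secrets plus the same 4-byte trailer.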
00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@653 -- # (( es > 128 ))
00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@664 -- # [[ -n '' ]]
00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@669 -- # (( !es == 0 ))
00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@642 -- # local es=0
00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@630 -- # local arg=rpc_cmd
00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in
00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # type -t rpc_cmd
00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in
00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@645 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:46.121 request:
00:23:46.121 {
00:23:46.121 "name": "nvme0",
00:23:46.121 "trtype": "tcp",
00:23:46.121 "traddr": "10.0.0.1",
00:23:46.121 "adrfam": "ipv4",
00:23:46.121 "trsvcid": "4420",
00:23:46.121 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:23:46.121 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:23:46.121 "prchk_reftag": false,
00:23:46.121 "prchk_guard": false,
00:23:46.121 "hdgst": false,
00:23:46.121 "ddgst": false,
00:23:46.121 "dhchap_key": "key2",
00:23:46.121 "method": "bdev_nvme_attach_controller",
00:23:46.121 "req_id": 1
00:23:46.121 }
00:23:46.121 Got JSON-RPC error response
00:23:46.121 response:
00:23:46.121 {
00:23:46.121 "code": -5,
00:23:46.121 "message": "Input/output error"
00:23:46.121 }
00:23:46.121 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]]
00:23:46.122 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@645 -- # es=1
00:23:46.122 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@653 -- # (( es > 128 ))
00:23:46.122 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@664 -- # [[ -n '' ]]
00:23:46.122 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@669 -- # (( !es == 0 ))
00:23:46.122 22:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers
00:23:46.122 22:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length
00:23:46.122 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:46.122 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 ))
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@642 -- # local es=0
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@630 -- # local arg=rpc_cmd
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # type -t rpc_cmd
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@645 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:46.380 request:
00:23:46.380 {
00:23:46.380 "name": "nvme0",
00:23:46.380 "trtype": "tcp",
00:23:46.380 "traddr": "10.0.0.1",
00:23:46.380 "adrfam": "ipv4",
00:23:46.380 "trsvcid": "4420",
00:23:46.380 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:23:46.380 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:23:46.380 "prchk_reftag": false,
00:23:46.380 "prchk_guard": false,
00:23:46.380 "hdgst": false,
00:23:46.380 "ddgst": false,
00:23:46.380 "dhchap_key": "key1",
00:23:46.380 "dhchap_ctrlr_key": "ckey2",
00:23:46.380 "method": "bdev_nvme_attach_controller",
00:23:46.380 "req_id": 1
00:23:46.380 }
00:23:46.380 Got JSON-RPC error response
00:23:46.380 response:
00:23:46.380 {
00:23:46.380 "code": -5,
00:23:46.380 "message": "Input/output error"
00:23:46.380 }
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]]
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@645 -- # es=1
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@653 -- # (( es > 128 ))
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@664 -- # [[ -n '' ]]
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@669 -- # (( !es == 0 ))
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:46.380 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:23:46.381 rmmod nvme_tcp
rmmod nvme_fabrics
00:23:46.381 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:46.381 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e
00:23:46.381 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0
00:23:46.381 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1344315 ']'
00:23:46.381 22:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1344315
00:23:46.381 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@942 -- # '[' -z 1344315 ']'
00:23:46.381 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # kill -0 1344315
00:23:46.381 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@947 -- # uname
00:23:46.381 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:23:46.381 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1344315
00:23:46.381 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # process_name=reactor_0
00:23:46.381 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']'
00:23:46.381 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1344315'
killing process with pid 1344315
00:23:46.381 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@961 -- # kill 1344315
00:23:46.381 22:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # wait 1344315
00:23:46.639 22:47:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:23:46.639 22:47:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:23:46.639 22:47:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:23:46.639 22:47:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:23:46.639 22:47:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns
00:23:46.639 22:47:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:46.639 22:47:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:46.639 22:47:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:49.175 22:47:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:23:49.175 22:47:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:23:49.175 22:47:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:23:49.175 22:47:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target
00:23:49.175 22:47:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]]
00:23:49.175 22:47:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0
00:23:49.175 22:47:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
00:23:49.175 22:47:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:23:49.175 22:47:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:23:49.175 22:47:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:23:49.175 22:47:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*)
00:23:49.175 22:47:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet
00:23:49.175 22:47:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:23:50.111 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:23:50.111 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:23:50.111 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:23:50.111 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:23:50.111 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:23:50.111 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:23:50.111 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:23:50.111 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:23:50.111 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:23:50.111 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:23:50.111 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:23:50.111 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:23:50.111 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:23:50.111 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:23:50.111 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:23:50.111 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:23:51.043 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:23:51.043 22:47:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.aLJ /tmp/spdk.key-null.MwK /tmp/spdk.key-sha256.suk /tmp/spdk.key-sha384.ocp /tmp/spdk.key-sha512.qlj /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log
00:23:51.043 22:47:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:23:52.418 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:23:52.418 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:23:52.418 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:23:52.418 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:23:52.418 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:23:52.418 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:23:52.418 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:23:52.418 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:23:52.418 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:23:52.418 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:23:52.418 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:23:52.418 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:23:52.418 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:23:52.418 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:23:52.418 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:23:52.418 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:23:52.418 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:23:52.418
00:23:52.418 real 0m49.156s
00:23:52.418 user 0m46.928s
00:23:52.418 sys 0m5.642s
00:23:52.418 22:47:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1118 -- # xtrace_disable
00:23:52.418 22:47:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:52.418 ************************************
00:23:52.418 END TEST nvmf_auth_host
00:23:52.418 ************************************
00:23:52.419 22:47:35 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0
00:23:52.419 22:47:35 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]]
00:23:52.419 22:47:35 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp
00:23:52.419 22:47:35 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']'
00:23:52.419 22:47:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable
00:23:52.419 22:47:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:23:52.419 ************************************
00:23:52.419 START TEST nvmf_digest
00:23:52.419 ************************************
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp
00:23:52.419 * Looking for test storage...
00:23:52.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]]
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable
00:23:52.419 22:47:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:23:54.320 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:23:54.320 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=()
00:23:54.320 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs
00:23:54.320 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=()
00:23:54.320 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:23:54.320 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=()
00:23:54.320 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers
00:23:54.320 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=()
00:23:54.320 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs
00:23:54.320 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=()
00:23:54.320 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810
00:23:54.320 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=()
00:23:54.320 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722
00:23:54.320 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=()
00:23:54.320 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx
00:23:54.320 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:23:54.320 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:23:54.320 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:23:54.320 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:23:54.320 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:23:54.320 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
Found 0000:0a:00.0 (0x8086 - 0x159b)
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
Found 0000:0a:00.1 (0x8086 - 0x159b)
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]]
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
Found net devices under 0000:0a:00.0: cvl_0_0
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]]
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
Found net devices under 0000:0a:00.1: cvl_0_1
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:23:54.321 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:23:54.581 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:54.581 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms
00:23:54.581
00:23:54.581 --- 10.0.0.2 ping statistics ---
00:23:54.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:54.581 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:54.581 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:54.581 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms
00:23:54.581
00:23:54.581 --- 10.0.0.1 ping statistics ---
00:23:54.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:54.581 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]]
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # xtrace_disable
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:23:54.581 ************************************
00:23:54.581 START TEST nvmf_digest_clean
00:23:54.581 ************************************
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1117 -- # run_digest
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]]
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc")
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@716 -- # xtrace_disable
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1353747
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1353747
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@823 -- # '[' -z 1353747 ']'
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # local max_retries=100
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # xtrace_disable
00:23:54.581 22:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
[2024-07-15 22:47:37.952018] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization...
[2024-07-15 22:47:37.952106] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-07-15 22:47:38.020269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-15 22:47:38.140159] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-07-15 22:47:38.140223] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-07-15 22:47:38.140239] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-07-15 22:47:38.140252] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-07-15 22:47:38.140264] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:54.839 [2024-07-15 22:47:38.140296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.839 22:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:23:54.839 22:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # return 0 00:23:54.839 22:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:54.839 22:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:54.839 22:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:54.839 22:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:54.839 22:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:23:54.839 22:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:23:54.839 22:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:23:54.839 22:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:54.839 22:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:54.839 null0 00:23:54.839 [2024-07-15 22:47:38.334935] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:55.098 [2024-07-15 22:47:38.359146] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:55.098 22:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:55.098 22:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:23:55.098 22:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:55.098 
22:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:55.098 22:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:55.098 22:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:55.098 22:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:55.098 22:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:55.098 22:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1353769 00:23:55.098 22:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:55.098 22:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1353769 /var/tmp/bperf.sock 00:23:55.098 22:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@823 -- # '[' -z 1353769 ']' 00:23:55.098 22:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:55.098 22:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # local max_retries=100 00:23:55.098 22:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:55.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:55.098 22:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # xtrace_disable 00:23:55.098 22:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:55.098 [2024-07-15 22:47:38.410089] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:23:55.098 [2024-07-15 22:47:38.410169] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1353769 ] 00:23:55.098 [2024-07-15 22:47:38.476475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.098 [2024-07-15 22:47:38.592277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.031 22:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:23:56.031 22:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # return 0 00:23:56.031 22:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:56.031 22:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:56.031 22:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:56.289 22:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:56.289 22:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:56.856 nvme0n1 00:23:56.856 22:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:56.856 22:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:56.856 Running I/O for 2 
seconds... 00:23:58.760 00:23:58.760 Latency(us) 00:23:58.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.760 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:58.760 nvme0n1 : 2.01 20226.47 79.01 0.00 0.00 6320.00 3495.25 15825.73 00:23:58.760 =================================================================================================================== 00:23:58.760 Total : 20226.47 79.01 0.00 0.00 6320.00 3495.25 15825.73 00:23:58.760 0 00:23:59.019 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:59.019 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:59.019 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:59.019 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:59.019 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:59.019 | select(.opcode=="crc32c") 00:23:59.019 | "\(.module_name) \(.executed)"' 00:23:59.278 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:59.278 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:59.278 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:59.278 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:59.278 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1353769 00:23:59.278 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@942 -- # '[' -z 1353769 ']' 00:23:59.278 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # kill 
-0 1353769 00:23:59.278 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # uname 00:23:59.278 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:23:59.278 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1353769 00:23:59.278 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:23:59.278 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:23:59.278 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1353769' 00:23:59.278 killing process with pid 1353769 00:23:59.278 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@961 -- # kill 1353769 00:23:59.278 Received shutdown signal, test time was about 2.000000 seconds 00:23:59.278 00:23:59.278 Latency(us) 00:23:59.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.278 =================================================================================================================== 00:23:59.278 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:59.278 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # wait 1353769 00:23:59.536 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:23:59.537 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:59.537 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:59.537 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:59.537 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:23:59.537 22:47:42 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:23:59.537 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:59.537 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1354305 00:23:59.537 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:59.537 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1354305 /var/tmp/bperf.sock 00:23:59.537 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@823 -- # '[' -z 1354305 ']' 00:23:59.537 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:59.537 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # local max_retries=100 00:23:59.537 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:59.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:59.537 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # xtrace_disable 00:23:59.537 22:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:59.537 [2024-07-15 22:47:42.886910] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:23:59.537 [2024-07-15 22:47:42.887005] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1354305 ] 00:23:59.537 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:59.537 Zero copy mechanism will not be used. 00:23:59.537 [2024-07-15 22:47:42.950335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.793 [2024-07-15 22:47:43.070995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.793 22:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:23:59.793 22:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # return 0 00:23:59.793 22:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:59.793 22:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:59.793 22:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:00.050 22:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:00.050 22:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:00.616 nvme0n1 00:24:00.616 22:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:00.616 22:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:00.616 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:00.616 Zero copy mechanism will not be used. 00:24:00.616 Running I/O for 2 seconds... 00:24:03.185 00:24:03.186 Latency(us) 00:24:03.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.186 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:03.186 nvme0n1 : 2.05 1960.85 245.11 0.00 0.00 8004.68 6650.69 48351.00 00:24:03.186 =================================================================================================================== 00:24:03.186 Total : 1960.85 245.11 0.00 0.00 8004.68 6650.69 48351.00 00:24:03.186 0 00:24:03.186 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:03.186 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:03.186 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:03.186 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:03.186 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:03.186 | select(.opcode=="crc32c") 00:24:03.186 | "\(.module_name) \(.executed)"' 00:24:03.186 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:03.186 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:03.186 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:03.186 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:03.186 22:47:46 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1354305 00:24:03.186 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@942 -- # '[' -z 1354305 ']' 00:24:03.186 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # kill -0 1354305 00:24:03.186 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # uname 00:24:03.186 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:24:03.186 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1354305 00:24:03.186 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:24:03.186 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:24:03.186 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1354305' 00:24:03.186 killing process with pid 1354305 00:24:03.186 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@961 -- # kill 1354305 00:24:03.186 Received shutdown signal, test time was about 2.000000 seconds 00:24:03.186 00:24:03.186 Latency(us) 00:24:03.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.186 =================================================================================================================== 00:24:03.186 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:03.186 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # wait 1354305 00:24:03.444 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:03.444 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:03.444 22:47:46 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:03.444 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:03.444 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:03.444 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:03.444 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:03.444 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1354718 00:24:03.444 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1354718 /var/tmp/bperf.sock 00:24:03.444 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:03.444 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@823 -- # '[' -z 1354718 ']' 00:24:03.444 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:03.444 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # local max_retries=100 00:24:03.444 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:03.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:03.444 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # xtrace_disable 00:24:03.444 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:03.444 [2024-07-15 22:47:46.743669] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:24:03.444 [2024-07-15 22:47:46.743768] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1354718 ] 00:24:03.444 [2024-07-15 22:47:46.805635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.444 [2024-07-15 22:47:46.920699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.701 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:24:03.701 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # return 0 00:24:03.701 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:03.701 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:03.701 22:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:03.959 22:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:03.959 22:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:04.216 nvme0n1 00:24:04.475 22:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:04.475 22:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:04.475 Running I/O for 2 
seconds... 00:24:06.405 00:24:06.405 Latency(us) 00:24:06.405 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.405 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:06.405 nvme0n1 : 2.00 21662.12 84.62 0.00 0.00 5899.55 3203.98 14563.56 00:24:06.405 =================================================================================================================== 00:24:06.405 Total : 21662.12 84.62 0.00 0.00 5899.55 3203.98 14563.56 00:24:06.405 0 00:24:06.405 22:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:06.405 22:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:06.405 22:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:06.405 22:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:06.405 22:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:06.405 | select(.opcode=="crc32c") 00:24:06.405 | "\(.module_name) \(.executed)"' 00:24:06.665 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:06.665 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:06.665 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:06.665 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:06.665 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1354718 00:24:06.665 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@942 -- # '[' -z 1354718 ']' 00:24:06.665 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # kill 
-0 1354718 00:24:06.665 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # uname 00:24:06.665 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:24:06.665 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1354718 00:24:06.665 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:24:06.665 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:24:06.665 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1354718' 00:24:06.665 killing process with pid 1354718 00:24:06.665 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@961 -- # kill 1354718 00:24:06.665 Received shutdown signal, test time was about 2.000000 seconds 00:24:06.665 00:24:06.665 Latency(us) 00:24:06.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.665 =================================================================================================================== 00:24:06.665 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:06.665 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # wait 1354718 00:24:06.922 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:06.922 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:06.922 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:06.922 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:06.922 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:06.922 22:47:50 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:06.922 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:06.922 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1355215 00:24:06.922 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1355215 /var/tmp/bperf.sock 00:24:06.922 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:06.922 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@823 -- # '[' -z 1355215 ']' 00:24:06.922 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:06.922 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # local max_retries=100 00:24:06.922 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:06.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:06.922 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # xtrace_disable 00:24:06.922 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:07.180 [2024-07-15 22:47:50.451648] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:24:07.180 [2024-07-15 22:47:50.451740] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1355215 ] 00:24:07.180 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:07.180 Zero copy mechanism will not be used. 00:24:07.180 [2024-07-15 22:47:50.513578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.180 [2024-07-15 22:47:50.628380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:07.180 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:24:07.180 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # return 0 00:24:07.180 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:07.180 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:07.180 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:07.746 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:07.746 22:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:08.005 nvme0n1 00:24:08.005 22:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:08.006 22:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:08.006 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:08.006 Zero copy mechanism will not be used. 00:24:08.006 Running I/O for 2 seconds... 00:24:10.552 00:24:10.552 Latency(us) 00:24:10.552 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.552 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:10.552 nvme0n1 : 2.01 1771.69 221.46 0.00 0.00 9010.98 3228.25 10728.49 00:24:10.552 =================================================================================================================== 00:24:10.552 Total : 1771.69 221.46 0.00 0.00 9010.98 3228.25 10728.49 00:24:10.552 0 00:24:10.552 22:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:10.552 22:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:10.552 22:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:10.552 22:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:10.552 22:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:10.552 | select(.opcode=="crc32c") 00:24:10.552 | "\(.module_name) \(.executed)"' 00:24:10.552 22:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:10.552 22:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:10.552 22:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:10.552 22:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:10.552 22:47:53 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1355215 00:24:10.552 22:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@942 -- # '[' -z 1355215 ']' 00:24:10.552 22:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # kill -0 1355215 00:24:10.552 22:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # uname 00:24:10.552 22:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:24:10.552 22:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1355215 00:24:10.552 22:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:24:10.552 22:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:24:10.552 22:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1355215' 00:24:10.552 killing process with pid 1355215 00:24:10.552 22:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@961 -- # kill 1355215 00:24:10.552 Received shutdown signal, test time was about 2.000000 seconds 00:24:10.552 00:24:10.552 Latency(us) 00:24:10.552 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.552 =================================================================================================================== 00:24:10.552 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:10.552 22:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # wait 1355215 00:24:10.552 22:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1353747 00:24:10.552 22:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@942 -- # '[' -z 1353747 ']' 00:24:10.552 22:47:54 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # kill -0 1353747 00:24:10.552 22:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # uname 00:24:10.552 22:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:24:10.552 22:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1353747 00:24:10.552 22:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:24:10.552 22:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:24:10.552 22:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1353747' 00:24:10.552 killing process with pid 1353747 00:24:10.552 22:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@961 -- # kill 1353747 00:24:10.552 22:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # wait 1353747 00:24:11.120 00:24:11.120 real 0m16.415s 00:24:11.120 user 0m32.331s 00:24:11.120 sys 0m3.884s 00:24:11.120 22:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1118 -- # xtrace_disable 00:24:11.120 22:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:11.120 ************************************ 00:24:11.120 END TEST nvmf_digest_clean 00:24:11.120 ************************************ 00:24:11.120 22:47:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1136 -- # return 0 00:24:11.120 22:47:54 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:11.120 22:47:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:24:11.120 22:47:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # xtrace_disable 00:24:11.120 22:47:54 
nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:11.120 ************************************ 00:24:11.120 START TEST nvmf_digest_error 00:24:11.120 ************************************ 00:24:11.120 22:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1117 -- # run_digest_error 00:24:11.120 22:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:11.120 22:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:11.120 22:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:11.120 22:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:11.120 22:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1355680 00:24:11.120 22:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:11.120 22:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1355680 00:24:11.120 22:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@823 -- # '[' -z 1355680 ']' 00:24:11.120 22:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.120 22:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # local max_retries=100 00:24:11.120 22:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:11.120 22:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # xtrace_disable 00:24:11.120 22:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:11.120 [2024-07-15 22:47:54.423530] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:24:11.120 [2024-07-15 22:47:54.423621] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:11.120 [2024-07-15 22:47:54.492611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.120 [2024-07-15 22:47:54.609252] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:11.120 [2024-07-15 22:47:54.609313] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:11.120 [2024-07-15 22:47:54.609338] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:11.120 [2024-07-15 22:47:54.609351] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:11.120 [2024-07-15 22:47:54.609363] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:11.120 [2024-07-15 22:47:54.609399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.052 22:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:24:12.052 22:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # return 0 00:24:12.052 22:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:12.052 22:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:12.052 22:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:12.052 22:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:12.053 22:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:12.053 22:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:12.053 22:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:12.053 [2024-07-15 22:47:55.447993] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:12.053 22:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:12.053 22:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:12.053 22:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:12.053 22:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:12.053 22:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:12.311 null0 00:24:12.311 [2024-07-15 22:47:55.572692] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:12.311 
[2024-07-15 22:47:55.596924] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:12.311 22:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:12.311 22:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:12.311 22:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:12.311 22:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:12.311 22:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:12.311 22:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:12.311 22:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1355834 00:24:12.311 22:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:12.311 22:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1355834 /var/tmp/bperf.sock 00:24:12.311 22:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@823 -- # '[' -z 1355834 ']' 00:24:12.311 22:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:12.311 22:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # local max_retries=100 00:24:12.311 22:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:12.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:24:12.311 22:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # xtrace_disable 00:24:12.311 22:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:12.311 [2024-07-15 22:47:55.644656] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:24:12.311 [2024-07-15 22:47:55.644753] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1355834 ] 00:24:12.311 [2024-07-15 22:47:55.707377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.570 [2024-07-15 22:47:55.824253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.570 22:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:24:12.570 22:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # return 0 00:24:12.570 22:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:12.570 22:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:12.828 22:47:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:12.828 22:47:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:12.828 22:47:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:12.828 22:47:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:12.828 22:47:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:12.828 22:47:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:13.396 nvme0n1 00:24:13.396 22:47:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:13.396 22:47:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:13.396 22:47:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:13.396 22:47:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:13.396 22:47:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:13.396 22:47:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:13.396 Running I/O for 2 seconds... 
00:24:13.396 [2024-07-15 22:47:56.874821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:13.396 [2024-07-15 22:47:56.874896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.396 [2024-07-15 22:47:56.874919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.396 [2024-07-15 22:47:56.889011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:13.396 [2024-07-15 22:47:56.889043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.396 [2024-07-15 22:47:56.889061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.655 [2024-07-15 22:47:56.902783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:13.655 [2024-07-15 22:47:56.902814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.655 [2024-07-15 22:47:56.902837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.655 [2024-07-15 22:47:56.913939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:13.655 [2024-07-15 22:47:56.913970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.655 [2024-07-15 22:47:56.913990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.655 [2024-07-15 22:47:56.928665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:13.655 [2024-07-15 22:47:56.928695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.655 [2024-07-15 22:47:56.928715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.655 [2024-07-15 22:47:56.939742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:13.655 [2024-07-15 22:47:56.939769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.655 [2024-07-15 22:47:56.939785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.655 [2024-07-15 22:47:56.953681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:13.655 [2024-07-15 22:47:56.953712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.655 [2024-07-15 22:47:56.953732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.655 [2024-07-15 22:47:56.966645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:13.655 [2024-07-15 22:47:56.966675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.655 [2024-07-15 22:47:56.966694] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.655 [2024-07-15 22:47:56.979302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:13.655 [2024-07-15 22:47:56.979331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.655 [2024-07-15 22:47:56.979349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.655 [2024-07-15 22:47:56.990928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:13.655 [2024-07-15 22:47:56.990958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.655 [2024-07-15 22:47:56.990981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.655 [2024-07-15 22:47:57.004476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:13.655 [2024-07-15 22:47:57.004507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.655 [2024-07-15 22:47:57.004548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.655 [2024-07-15 22:47:57.017485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:13.655 [2024-07-15 22:47:57.017516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:13.655 [2024-07-15 22:47:57.017535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.655 [2024-07-15 22:47:57.029840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:13.655 [2024-07-15 22:47:57.029900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.655 [2024-07-15 22:47:57.029917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.655 [2024-07-15 22:47:57.043091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:13.655 [2024-07-15 22:47:57.043121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.655 [2024-07-15 22:47:57.043138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.655 [2024-07-15 22:47:57.056042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:13.655 [2024-07-15 22:47:57.056074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.655 [2024-07-15 22:47:57.056090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.655 [2024-07-15 22:47:57.068392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:13.655 [2024-07-15 22:47:57.068423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 
nsid:1 lba:6429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.655 [2024-07-15 22:47:57.068442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.655 [2024-07-15 22:47:57.081708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:13.655 [2024-07-15 22:47:57.081738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.655 [2024-07-15 22:47:57.081757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.655 [2024-07-15 22:47:57.092935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:13.655 [2024-07-15 22:47:57.092963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.655 [2024-07-15 22:47:57.092981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.655 [2024-07-15 22:47:57.105733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:13.655 [2024-07-15 22:47:57.105776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.655 [2024-07-15 22:47:57.105792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.655 [2024-07-15 22:47:57.119319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:13.655 [2024-07-15 22:47:57.119367] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.655 [2024-07-15 22:47:57.119385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.655 [2024-07-15 22:47:57.130281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:13.655 [2024-07-15 22:47:57.130309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.655 [2024-07-15 22:47:57.130327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.655 [2024-07-15 22:47:57.144212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:13.655 [2024-07-15 22:47:57.144244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.655 [2024-07-15 22:47:57.144262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.914 [2024-07-15 22:47:57.157179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:13.914 [2024-07-15 22:47:57.157210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.914 [2024-07-15 22:47:57.157227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.914 [2024-07-15 22:47:57.168066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ff0d50)
00:24:13.914 [2024-07-15 22:47:57.168095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.914 [2024-07-15 22:47:57.168112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:13.914 [2024-07-15 22:47:57.181666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:13.914 [2024-07-15 22:47:57.181696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.914 [2024-07-15 22:47:57.181721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:13.914 [2024-07-15 22:47:57.194375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:13.914 [2024-07-15 22:47:57.194404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.914 [2024-07-15 22:47:57.194421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:13.914 [2024-07-15 22:47:57.207281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:13.914 [2024-07-15 22:47:57.207310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.914 [2024-07-15 22:47:57.207331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:13.914 [2024-07-15 22:47:57.220064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:13.914 [2024-07-15 22:47:57.220095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.914 [2024-07-15 22:47:57.220111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:13.914 [2024-07-15 22:47:57.232944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:13.914 [2024-07-15 22:47:57.232975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.914 [2024-07-15 22:47:57.232992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:13.914 [2024-07-15 22:47:57.245790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:13.914 [2024-07-15 22:47:57.245820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.914 [2024-07-15 22:47:57.245839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:13.914 [2024-07-15 22:47:57.257802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:13.914 [2024-07-15 22:47:57.257832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.914 [2024-07-15 22:47:57.257852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:13.914 [2024-07-15 22:47:57.270735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:13.914 [2024-07-15 22:47:57.270762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.914 [2024-07-15 22:47:57.270779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:13.914 [2024-07-15 22:47:57.284328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:13.914 [2024-07-15 22:47:57.284369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.914 [2024-07-15 22:47:57.284384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:13.914 [2024-07-15 22:47:57.296511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:13.914 [2024-07-15 22:47:57.296541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.914 [2024-07-15 22:47:57.296559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:13.914 [2024-07-15 22:47:57.308928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:13.914 [2024-07-15 22:47:57.308958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.914 [2024-07-15 22:47:57.308975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:13.914 [2024-07-15 22:47:57.321101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:13.914 [2024-07-15 22:47:57.321131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.914 [2024-07-15 22:47:57.321149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:13.914 [2024-07-15 22:47:57.334119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:13.914 [2024-07-15 22:47:57.334164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.914 [2024-07-15 22:47:57.334190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:13.914 [2024-07-15 22:47:57.346918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:13.914 [2024-07-15 22:47:57.346947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.914 [2024-07-15 22:47:57.346965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:13.914 [2024-07-15 22:47:57.360715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:13.914 [2024-07-15 22:47:57.360745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.914 [2024-07-15 22:47:57.360763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:13.914 [2024-07-15 22:47:57.372522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:13.914 [2024-07-15 22:47:57.372568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.915 [2024-07-15 22:47:57.372585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:13.915 [2024-07-15 22:47:57.384936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:13.915 [2024-07-15 22:47:57.384966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.915 [2024-07-15 22:47:57.384983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:13.915 [2024-07-15 22:47:57.398838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:13.915 [2024-07-15 22:47:57.398891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.915 [2024-07-15 22:47:57.398910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:13.915 [2024-07-15 22:47:57.411447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:13.915 [2024-07-15 22:47:57.411490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.915 [2024-07-15 22:47:57.411506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.175 [2024-07-15 22:47:57.424698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.175 [2024-07-15 22:47:57.424742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.175 [2024-07-15 22:47:57.424764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.175 [2024-07-15 22:47:57.436272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.175 [2024-07-15 22:47:57.436301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.175 [2024-07-15 22:47:57.436319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.175 [2024-07-15 22:47:57.448696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.175 [2024-07-15 22:47:57.448730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.175 [2024-07-15 22:47:57.448749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.175 [2024-07-15 22:47:57.461511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.175 [2024-07-15 22:47:57.461541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.175 [2024-07-15 22:47:57.461560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.175 [2024-07-15 22:47:57.475184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.175 [2024-07-15 22:47:57.475215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.175 [2024-07-15 22:47:57.475232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.175 [2024-07-15 22:47:57.487714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.175 [2024-07-15 22:47:57.487744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.175 [2024-07-15 22:47:57.487761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.175 [2024-07-15 22:47:57.500481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.175 [2024-07-15 22:47:57.500508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.175 [2024-07-15 22:47:57.500523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.175 [2024-07-15 22:47:57.513882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.175 [2024-07-15 22:47:57.513925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.175 [2024-07-15 22:47:57.513940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.175 [2024-07-15 22:47:57.526213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.175 [2024-07-15 22:47:57.526243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.175 [2024-07-15 22:47:57.526259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.175 [2024-07-15 22:47:57.539170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.175 [2024-07-15 22:47:57.539214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.175 [2024-07-15 22:47:57.539231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.175 [2024-07-15 22:47:57.551028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.175 [2024-07-15 22:47:57.551057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.175 [2024-07-15 22:47:57.551078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.175 [2024-07-15 22:47:57.564588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.175 [2024-07-15 22:47:57.564632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.175 [2024-07-15 22:47:57.564648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.175 [2024-07-15 22:47:57.578841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.175 [2024-07-15 22:47:57.578894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.175 [2024-07-15 22:47:57.578913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.175 [2024-07-15 22:47:57.591437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.175 [2024-07-15 22:47:57.591467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:25560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.175 [2024-07-15 22:47:57.591484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.175 [2024-07-15 22:47:57.603735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.175 [2024-07-15 22:47:57.603762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.175 [2024-07-15 22:47:57.603778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.175 [2024-07-15 22:47:57.617779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.175 [2024-07-15 22:47:57.617810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.175 [2024-07-15 22:47:57.617826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.175 [2024-07-15 22:47:57.629613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.175 [2024-07-15 22:47:57.629643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.175 [2024-07-15 22:47:57.629659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.175 [2024-07-15 22:47:57.642581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.175 [2024-07-15 22:47:57.642609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.175 [2024-07-15 22:47:57.642625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.175 [2024-07-15 22:47:57.656501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.175 [2024-07-15 22:47:57.656531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.175 [2024-07-15 22:47:57.656548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.175 [2024-07-15 22:47:57.668498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.175 [2024-07-15 22:47:57.668533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.175 [2024-07-15 22:47:57.668550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.435 [2024-07-15 22:47:57.681483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.435 [2024-07-15 22:47:57.681515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.435 [2024-07-15 22:47:57.681532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.435 [2024-07-15 22:47:57.695075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.435 [2024-07-15 22:47:57.695105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.435 [2024-07-15 22:47:57.695122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.435 [2024-07-15 22:47:57.706586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.435 [2024-07-15 22:47:57.706616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.435 [2024-07-15 22:47:57.706632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.435 [2024-07-15 22:47:57.718780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.435 [2024-07-15 22:47:57.718810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.435 [2024-07-15 22:47:57.718826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.435 [2024-07-15 22:47:57.731853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.435 [2024-07-15 22:47:57.731891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.435 [2024-07-15 22:47:57.731909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.435 [2024-07-15 22:47:57.743628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.435 [2024-07-15 22:47:57.743658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.435 [2024-07-15 22:47:57.743674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.435 [2024-07-15 22:47:57.755959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.435 [2024-07-15 22:47:57.755990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.435 [2024-07-15 22:47:57.756007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.435 [2024-07-15 22:47:57.769823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.435 [2024-07-15 22:47:57.769853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.435 [2024-07-15 22:47:57.769893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.435 [2024-07-15 22:47:57.780464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.435 [2024-07-15 22:47:57.780496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.435 [2024-07-15 22:47:57.780513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.435 [2024-07-15 22:47:57.793853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.435 [2024-07-15 22:47:57.793905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.435 [2024-07-15 22:47:57.793936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.435 [2024-07-15 22:47:57.807240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.435 [2024-07-15 22:47:57.807270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.435 [2024-07-15 22:47:57.807300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.435 [2024-07-15 22:47:57.820070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.435 [2024-07-15 22:47:57.820100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.435 [2024-07-15 22:47:57.820117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.435 [2024-07-15 22:47:57.831953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.435 [2024-07-15 22:47:57.831981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.435 [2024-07-15 22:47:57.831997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.435 [2024-07-15 22:47:57.845012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.435 [2024-07-15 22:47:57.845043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.435 [2024-07-15 22:47:57.845059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.435 [2024-07-15 22:47:57.857964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.435 [2024-07-15 22:47:57.857994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.435 [2024-07-15 22:47:57.858011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.435 [2024-07-15 22:47:57.869327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.435 [2024-07-15 22:47:57.869354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.435 [2024-07-15 22:47:57.869370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.435 [2024-07-15 22:47:57.882148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.435 [2024-07-15 22:47:57.882191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.435 [2024-07-15 22:47:57.882229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.435 [2024-07-15 22:47:57.894805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.435 [2024-07-15 22:47:57.894835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.435 [2024-07-15 22:47:57.894852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.435 [2024-07-15 22:47:57.907537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.435 [2024-07-15 22:47:57.907568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.435 [2024-07-15 22:47:57.907585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.435 [2024-07-15 22:47:57.919583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.435 [2024-07-15 22:47:57.919612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.435 [2024-07-15 22:47:57.919627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.435 [2024-07-15 22:47:57.933352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.435 [2024-07-15 22:47:57.933383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.435 [2024-07-15 22:47:57.933400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.696 [2024-07-15 22:47:57.948112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.696 [2024-07-15 22:47:57.948142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.696 [2024-07-15 22:47:57.948185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.696 [2024-07-15 22:47:57.958653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.696 [2024-07-15 22:47:57.958680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.696 [2024-07-15 22:47:57.958696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.696 [2024-07-15 22:47:57.972295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.696 [2024-07-15 22:47:57.972321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.696 [2024-07-15 22:47:57.972336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.696 [2024-07-15 22:47:57.985512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.696 [2024-07-15 22:47:57.985557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.696 [2024-07-15 22:47:57.985575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.696 [2024-07-15 22:47:57.997999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.696 [2024-07-15 22:47:57.998035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.696 [2024-07-15 22:47:57.998053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.696 [2024-07-15 22:47:58.009691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.696 [2024-07-15 22:47:58.009721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.696 [2024-07-15 22:47:58.009737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.696 [2024-07-15 22:47:58.022410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.696 [2024-07-15 22:47:58.022440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.696 [2024-07-15 22:47:58.022456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.696 [2024-07-15 22:47:58.034660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.696 [2024-07-15 22:47:58.034703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.696 [2024-07-15 22:47:58.034718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.696 [2024-07-15 22:47:58.048544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.696 [2024-07-15 22:47:58.048584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.696 [2024-07-15 22:47:58.048600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.696 [2024-07-15 22:47:58.061653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.696 [2024-07-15 22:47:58.061683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.696 [2024-07-15 22:47:58.061699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.696 [2024-07-15 22:47:58.073547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.696 [2024-07-15 22:47:58.073576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.696 [2024-07-15 22:47:58.073592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.696 [2024-07-15 22:47:58.084815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.696 [2024-07-15 22:47:58.084844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.696 [2024-07-15 22:47:58.084883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.696 [2024-07-15 22:47:58.099091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.696 [2024-07-15 22:47:58.099122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.696 [2024-07-15 22:47:58.099139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.696 [2024-07-15 22:47:58.112018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.696 [2024-07-15 22:47:58.112049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.696 [2024-07-15 22:47:58.112068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.696 [2024-07-15 22:47:58.127462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.696 [2024-07-15 22:47:58.127497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.696 [2024-07-15 22:47:58.127516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.696 [2024-07-15 22:47:58.140941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.696 [2024-07-15 22:47:58.140972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.696 [2024-07-15 22:47:58.140989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.696 [2024-07-15 22:47:58.154402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.696 [2024-07-15 22:47:58.154437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.696 [2024-07-15 22:47:58.154455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.696 [2024-07-15 22:47:58.168766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.696 [2024-07-15 22:47:58.168802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.696 [2024-07-15 22:47:58.168822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.696 [2024-07-15 22:47:58.180189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.696 [2024-07-15 22:47:58.180223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.697 [2024-07-15 22:47:58.180241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.697 [2024-07-15 22:47:58.195935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50)
00:24:14.697 [2024-07-15 22:47:58.195965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.958 [2024-07-15 22:47:58.195999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.958 [2024-07-15 22:47:58.208136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on
tqpair=(0x1ff0d50) 00:24:14.958 [2024-07-15 22:47:58.208165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.958 [2024-07-15 22:47:58.208181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.958 [2024-07-15 22:47:58.223051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:14.958 [2024-07-15 22:47:58.223082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.958 [2024-07-15 22:47:58.223104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.958 [2024-07-15 22:47:58.235138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:14.958 [2024-07-15 22:47:58.235167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.958 [2024-07-15 22:47:58.235197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.958 [2024-07-15 22:47:58.249745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:14.958 [2024-07-15 22:47:58.249780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.958 [2024-07-15 22:47:58.249799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.958 [2024-07-15 22:47:58.263800] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:14.958 [2024-07-15 22:47:58.263834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.958 [2024-07-15 22:47:58.263856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.958 [2024-07-15 22:47:58.276971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:14.958 [2024-07-15 22:47:58.277001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.958 [2024-07-15 22:47:58.277018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.958 [2024-07-15 22:47:58.290567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:14.958 [2024-07-15 22:47:58.290600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.958 [2024-07-15 22:47:58.290619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.958 [2024-07-15 22:47:58.303053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:14.958 [2024-07-15 22:47:58.303084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.958 [2024-07-15 22:47:58.303101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:24:14.958 [2024-07-15 22:47:58.317948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:14.958 [2024-07-15 22:47:58.317978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.958 [2024-07-15 22:47:58.317995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.958 [2024-07-15 22:47:58.331902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:14.958 [2024-07-15 22:47:58.331948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.958 [2024-07-15 22:47:58.331966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.958 [2024-07-15 22:47:58.344811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:14.958 [2024-07-15 22:47:58.344845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.958 [2024-07-15 22:47:58.344863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.958 [2024-07-15 22:47:58.359568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:14.958 [2024-07-15 22:47:58.359602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.958 [2024-07-15 22:47:58.359621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.958 [2024-07-15 22:47:58.371625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:14.958 [2024-07-15 22:47:58.371659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.958 [2024-07-15 22:47:58.371678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.958 [2024-07-15 22:47:58.385338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:14.958 [2024-07-15 22:47:58.385372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.958 [2024-07-15 22:47:58.385391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.958 [2024-07-15 22:47:58.399179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:14.958 [2024-07-15 22:47:58.399209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.958 [2024-07-15 22:47:58.399244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.958 [2024-07-15 22:47:58.412532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:14.958 [2024-07-15 22:47:58.412566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.958 [2024-07-15 
22:47:58.412585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.958 [2024-07-15 22:47:58.425922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:14.958 [2024-07-15 22:47:58.425958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.958 [2024-07-15 22:47:58.425989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.958 [2024-07-15 22:47:58.441021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:14.958 [2024-07-15 22:47:58.441052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.958 [2024-07-15 22:47:58.441069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.959 [2024-07-15 22:47:58.452648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:14.959 [2024-07-15 22:47:58.452682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.959 [2024-07-15 22:47:58.452706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.218 [2024-07-15 22:47:58.466646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:15.218 [2024-07-15 22:47:58.466680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22828 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.218 [2024-07-15 22:47:58.466699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.218 [2024-07-15 22:47:58.481160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:15.218 [2024-07-15 22:47:58.481207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.218 [2024-07-15 22:47:58.481226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.218 [2024-07-15 22:47:58.493563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:15.218 [2024-07-15 22:47:58.493594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.218 [2024-07-15 22:47:58.493610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.218 [2024-07-15 22:47:58.506862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:15.218 [2024-07-15 22:47:58.506905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:7425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.218 [2024-07-15 22:47:58.506946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.218 [2024-07-15 22:47:58.520261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:15.218 [2024-07-15 22:47:58.520294] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.218 [2024-07-15 22:47:58.520312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.218 [2024-07-15 22:47:58.534873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:15.218 [2024-07-15 22:47:58.534926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.218 [2024-07-15 22:47:58.534943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.218 [2024-07-15 22:47:58.547605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:15.218 [2024-07-15 22:47:58.547638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.218 [2024-07-15 22:47:58.547657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.218 [2024-07-15 22:47:58.561706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:15.218 [2024-07-15 22:47:58.561739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.218 [2024-07-15 22:47:58.561757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.218 [2024-07-15 22:47:58.574102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ff0d50) 00:24:15.218 [2024-07-15 22:47:58.574137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.218 [2024-07-15 22:47:58.574154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.218 [2024-07-15 22:47:58.588632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:15.218 [2024-07-15 22:47:58.588666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.218 [2024-07-15 22:47:58.588684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.218 [2024-07-15 22:47:58.602421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:15.218 [2024-07-15 22:47:58.602455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.218 [2024-07-15 22:47:58.602473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.218 [2024-07-15 22:47:58.614312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:15.218 [2024-07-15 22:47:58.614346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.218 [2024-07-15 22:47:58.614365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.219 [2024-07-15 22:47:58.629945] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:15.219 [2024-07-15 22:47:58.629975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.219 [2024-07-15 22:47:58.629992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.219 [2024-07-15 22:47:58.643897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:15.219 [2024-07-15 22:47:58.643943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.219 [2024-07-15 22:47:58.643960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.219 [2024-07-15 22:47:58.657052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:15.219 [2024-07-15 22:47:58.657082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.219 [2024-07-15 22:47:58.657098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.219 [2024-07-15 22:47:58.670634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:15.219 [2024-07-15 22:47:58.670667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.219 [2024-07-15 22:47:58.670685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:24:15.219 [2024-07-15 22:47:58.682358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:15.219 [2024-07-15 22:47:58.682394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.219 [2024-07-15 22:47:58.682414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.219 [2024-07-15 22:47:58.697620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:15.219 [2024-07-15 22:47:58.697654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.219 [2024-07-15 22:47:58.697672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.219 [2024-07-15 22:47:58.710769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:15.219 [2024-07-15 22:47:58.710803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.219 [2024-07-15 22:47:58.710821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.478 [2024-07-15 22:47:58.722957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:15.478 [2024-07-15 22:47:58.723001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.478 [2024-07-15 22:47:58.723016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.478 [2024-07-15 22:47:58.738310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:15.478 [2024-07-15 22:47:58.738343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.478 [2024-07-15 22:47:58.738362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.478 [2024-07-15 22:47:58.755133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:15.478 [2024-07-15 22:47:58.755164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.478 [2024-07-15 22:47:58.755180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.478 [2024-07-15 22:47:58.767664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:15.478 [2024-07-15 22:47:58.767698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.478 [2024-07-15 22:47:58.767716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.478 [2024-07-15 22:47:58.781552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:15.478 [2024-07-15 22:47:58.781585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.478 [2024-07-15 
22:47:58.781603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.478 [2024-07-15 22:47:58.793351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:15.478 [2024-07-15 22:47:58.793384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.478 [2024-07-15 22:47:58.793402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.478 [2024-07-15 22:47:58.809635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:15.478 [2024-07-15 22:47:58.809670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.478 [2024-07-15 22:47:58.809695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.478 [2024-07-15 22:47:58.823109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:15.478 [2024-07-15 22:47:58.823139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.478 [2024-07-15 22:47:58.823171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.478 [2024-07-15 22:47:58.836482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:15.478 [2024-07-15 22:47:58.836515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3197 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.479 [2024-07-15 22:47:58.836533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.479 [2024-07-15 22:47:58.850216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:15.479 [2024-07-15 22:47:58.850249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.479 [2024-07-15 22:47:58.850268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.479 [2024-07-15 22:47:58.861082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff0d50) 00:24:15.479 [2024-07-15 22:47:58.861112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.479 [2024-07-15 22:47:58.861128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.479 00:24:15.479 Latency(us) 00:24:15.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.479 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:15.479 nvme0n1 : 2.05 19081.69 74.54 0.00 0.00 6565.84 3034.07 45244.11 00:24:15.479 =================================================================================================================== 00:24:15.479 Total : 19081.69 74.54 0.00 0.00 6565.84 3034.07 45244.11 00:24:15.479 0 00:24:15.479 22:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:15.479 22:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 
00:24:15.479 22:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:15.479 22:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:15.479 | .driver_specific
00:24:15.479 | .nvme_error
00:24:15.479 | .status_code
00:24:15.479 | .command_transient_transport_error'
00:24:15.737 22:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 153 > 0 ))
00:24:15.737 22:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1355834
00:24:15.737 22:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@942 -- # '[' -z 1355834 ']'
00:24:15.737 22:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # kill -0 1355834
00:24:15.737 22:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # uname
00:24:15.737 22:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:24:15.737 22:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1355834
00:24:15.738 22:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # process_name=reactor_1
00:24:15.738 22:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']'
00:24:15.738 22:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1355834'
killing process with pid 1355834
00:24:15.738 22:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@961 -- # kill 1355834
Received shutdown signal, test time was about 2.000000 seconds
00:24:15.738
00:24:15.738 Latency(us)
00:24:15.738 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:15.738 ===================================================================================================================
00:24:15.738 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:15.738 22:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # wait 1355834
00:24:15.996 22:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:24:15.996 22:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:15.996 22:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:24:15.996 22:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:24:15.996 22:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:24:15.996 22:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1356313
00:24:15.996 22:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:24:15.996 22:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1356313 /var/tmp/bperf.sock
00:24:15.996 22:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@823 -- # '[' -z 1356313 ']'
00:24:15.996 22:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:15.996 22:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # local max_retries=100
00:24:15.996 22:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:24:15.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:15.996 22:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # xtrace_disable 00:24:15.996 22:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:16.256 [2024-07-15 22:47:59.504381] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:24:16.256 [2024-07-15 22:47:59.504473] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1356313 ] 00:24:16.256 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:16.256 Zero copy mechanism will not be used. 00:24:16.256 [2024-07-15 22:47:59.566659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.256 [2024-07-15 22:47:59.679545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.514 22:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:24:16.514 22:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # return 0 00:24:16.514 22:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:16.514 22:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:16.772 22:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:16.772 22:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:16.772 22:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:16.772 22:48:00 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:16.772 22:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:16.772 22:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:17.031 nvme0n1 00:24:17.031 22:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:17.031 22:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:17.031 22:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:17.031 22:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:17.031 22:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:17.031 22:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:17.289 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:17.289 Zero copy mechanism will not be used. 00:24:17.289 Running I/O for 2 seconds... 
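The jq pipeline that digest.sh runs against `bdev_get_iostat` (`.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error`) simply walks the iostat JSON down to the transient-transport error counter and checks it is non-zero. A minimal Python sketch of that same extraction is below; the sample response is hypothetical, shaped only to mirror the path the jq filter takes, with the counter set to the value 153 seen earlier in this log:

```python
import json

# Hypothetical bdev_get_iostat-style response: only the fields the jq
# filter touches are included, and the values are illustrative.
sample = json.dumps({
    "bdevs": [{
        "name": "nvme0n1",
        "driver_specific": {
            "nvme_error": {
                "status_code": {
                    # count of COMMAND TRANSIENT TRANSPORT ERROR completions
                    "command_transient_transport_error": 153
                }
            }
        }
    }]
})

def transient_transport_errors(raw: str) -> int:
    """Python equivalent of the jq filter:
    .bdevs[0] | .driver_specific | .nvme_error
              | .status_code | .command_transient_transport_error
    """
    stats = json.loads(raw)
    return (stats["bdevs"][0]
                 ["driver_specific"]
                 ["nvme_error"]
                 ["status_code"]
                 ["command_transient_transport_error"])

count = transient_transport_errors(sample)
# digest.sh then asserts (( count > 0 )) to confirm the injected
# crc32c corruption surfaced as data digest errors on the host side.
print(count)
```

This is a sketch of the check only; the real test obtains the JSON over the bperf RPC socket (`rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1`) rather than from an inline sample.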
00:24:17.289 [2024-07-15 22:48:00.614210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.289 [2024-07-15 22:48:00.614292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.289 [2024-07-15 22:48:00.614314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.289 [2024-07-15 22:48:00.629607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.289 [2024-07-15 22:48:00.629649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.289 [2024-07-15 22:48:00.629669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.289 [2024-07-15 22:48:00.645436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.289 [2024-07-15 22:48:00.645472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.289 [2024-07-15 22:48:00.645498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.289 [2024-07-15 22:48:00.661863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.290 [2024-07-15 22:48:00.661925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.290 [2024-07-15 22:48:00.661943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.290 [2024-07-15 22:48:00.678474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.290 [2024-07-15 22:48:00.678508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.290 [2024-07-15 22:48:00.678532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.290 [2024-07-15 22:48:00.694787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.290 [2024-07-15 22:48:00.694821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.290 [2024-07-15 22:48:00.694851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.290 [2024-07-15 22:48:00.711292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.290 [2024-07-15 22:48:00.711327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.290 [2024-07-15 22:48:00.711356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.290 [2024-07-15 22:48:00.727560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.290 [2024-07-15 22:48:00.727594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.290 [2024-07-15 22:48:00.727618] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.290 [2024-07-15 22:48:00.743784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.290 [2024-07-15 22:48:00.743818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.290 [2024-07-15 22:48:00.743838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.290 [2024-07-15 22:48:00.760015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.290 [2024-07-15 22:48:00.760044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.290 [2024-07-15 22:48:00.760068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.290 [2024-07-15 22:48:00.776273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.290 [2024-07-15 22:48:00.776300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.290 [2024-07-15 22:48:00.776317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.548 [2024-07-15 22:48:00.792648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.548 [2024-07-15 22:48:00.792691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:17.548 [2024-07-15 22:48:00.792710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.548 [2024-07-15 22:48:00.809030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.548 [2024-07-15 22:48:00.809059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.548 [2024-07-15 22:48:00.809082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.548 [2024-07-15 22:48:00.825705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.548 [2024-07-15 22:48:00.825738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.548 [2024-07-15 22:48:00.825770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.548 [2024-07-15 22:48:00.842040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.548 [2024-07-15 22:48:00.842069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.548 [2024-07-15 22:48:00.842088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.548 [2024-07-15 22:48:00.858720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.548 [2024-07-15 22:48:00.858754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.548 [2024-07-15 22:48:00.858783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.548 [2024-07-15 22:48:00.875043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.548 [2024-07-15 22:48:00.875088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.548 [2024-07-15 22:48:00.875107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.548 [2024-07-15 22:48:00.891467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.548 [2024-07-15 22:48:00.891502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.548 [2024-07-15 22:48:00.891522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.548 [2024-07-15 22:48:00.907882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.548 [2024-07-15 22:48:00.907929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.548 [2024-07-15 22:48:00.907946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.548 [2024-07-15 22:48:00.924485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.548 [2024-07-15 22:48:00.924519] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.548 [2024-07-15 22:48:00.924538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.548 [2024-07-15 22:48:00.940785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.548 [2024-07-15 22:48:00.940818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.548 [2024-07-15 22:48:00.940836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.548 [2024-07-15 22:48:00.957121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.548 [2024-07-15 22:48:00.957150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.548 [2024-07-15 22:48:00.957181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.548 [2024-07-15 22:48:00.973512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.548 [2024-07-15 22:48:00.973562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.548 [2024-07-15 22:48:00.973581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.548 [2024-07-15 22:48:00.989479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xa3a4f0) 00:24:17.548 [2024-07-15 22:48:00.989511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.548 [2024-07-15 22:48:00.989533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.548 [2024-07-15 22:48:01.004775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.548 [2024-07-15 22:48:01.004805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.548 [2024-07-15 22:48:01.004821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.548 [2024-07-15 22:48:01.019979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.548 [2024-07-15 22:48:01.020010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.548 [2024-07-15 22:48:01.020031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.548 [2024-07-15 22:48:01.035696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.548 [2024-07-15 22:48:01.035727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.548 [2024-07-15 22:48:01.035746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.807 [2024-07-15 22:48:01.050789] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.807 [2024-07-15 22:48:01.050835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.807 [2024-07-15 22:48:01.050851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.807 [2024-07-15 22:48:01.065736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.807 [2024-07-15 22:48:01.065781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.807 [2024-07-15 22:48:01.065803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.807 [2024-07-15 22:48:01.081177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.807 [2024-07-15 22:48:01.081209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.807 [2024-07-15 22:48:01.081250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.807 [2024-07-15 22:48:01.097108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.807 [2024-07-15 22:48:01.097152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.807 [2024-07-15 22:48:01.097169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:24:17.807 [2024-07-15 22:48:01.112955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.807 [2024-07-15 22:48:01.112985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.807 [2024-07-15 22:48:01.113002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.807 [2024-07-15 22:48:01.129020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.807 [2024-07-15 22:48:01.129063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.807 [2024-07-15 22:48:01.129081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.807 [2024-07-15 22:48:01.145245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.807 [2024-07-15 22:48:01.145293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.807 [2024-07-15 22:48:01.145312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.807 [2024-07-15 22:48:01.161328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.807 [2024-07-15 22:48:01.161362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.807 [2024-07-15 22:48:01.161382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.807 [2024-07-15 22:48:01.177546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.807 [2024-07-15 22:48:01.177580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.808 [2024-07-15 22:48:01.177605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.808 [2024-07-15 22:48:01.193772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.808 [2024-07-15 22:48:01.193805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.808 [2024-07-15 22:48:01.193826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.808 [2024-07-15 22:48:01.210189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.808 [2024-07-15 22:48:01.210249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.808 [2024-07-15 22:48:01.210267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.808 [2024-07-15 22:48:01.226650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.808 [2024-07-15 22:48:01.226685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.808 [2024-07-15 
22:48:01.226714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.808 [2024-07-15 22:48:01.243029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.808 [2024-07-15 22:48:01.243074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.808 [2024-07-15 22:48:01.243103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.808 [2024-07-15 22:48:01.259734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.808 [2024-07-15 22:48:01.259768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.808 [2024-07-15 22:48:01.259787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.808 [2024-07-15 22:48:01.276050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.808 [2024-07-15 22:48:01.276080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.808 [2024-07-15 22:48:01.276097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.808 [2024-07-15 22:48:01.292448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:17.808 [2024-07-15 22:48:01.292481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.808 [2024-07-15 22:48:01.292500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.068 [2024-07-15 22:48:01.309055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:18.068 [2024-07-15 22:48:01.309084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.068 [2024-07-15 22:48:01.309101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.068 [2024-07-15 22:48:01.325210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:18.068 [2024-07-15 22:48:01.325259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.068 [2024-07-15 22:48:01.325279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.068 [2024-07-15 22:48:01.341733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:18.068 [2024-07-15 22:48:01.341766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.068 [2024-07-15 22:48:01.341785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.068 [2024-07-15 22:48:01.358184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:18.068 [2024-07-15 22:48:01.358230] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.068 [2024-07-15 22:48:01.358249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.068 [2024-07-15 22:48:01.374570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:18.068 [2024-07-15 22:48:01.374603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.068 [2024-07-15 22:48:01.374622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.068 [2024-07-15 22:48:01.390831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:18.068 [2024-07-15 22:48:01.390865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.068 [2024-07-15 22:48:01.390899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.068 [2024-07-15 22:48:01.407315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:18.068 [2024-07-15 22:48:01.407349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.068 [2024-07-15 22:48:01.407368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.068 [2024-07-15 22:48:01.423727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xa3a4f0) 00:24:18.068 [2024-07-15 22:48:01.423762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.068 [2024-07-15 22:48:01.423781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.068 [2024-07-15 22:48:01.440115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:18.068 [2024-07-15 22:48:01.440159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.068 [2024-07-15 22:48:01.440176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.068 [2024-07-15 22:48:01.456540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:18.068 [2024-07-15 22:48:01.456574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.068 [2024-07-15 22:48:01.456592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.068 [2024-07-15 22:48:01.472790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0) 00:24:18.068 [2024-07-15 22:48:01.472824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.068 [2024-07-15 22:48:01.472843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.068 [2024-07-15 22:48:01.489306] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0)
00:24:18.068 [2024-07-15 22:48:01.489340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:18.068 [2024-07-15 22:48:01.489358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-line sequence (nvme_tcp.c:1459 data digest error, nvme_qpair.c:243 READ command, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats for each subsequent READ on qid:1 cid:15, with lba and sqhd varying, from 22:48:01.505912 through 22:48:02.593066; repeated entries elided ...]
00:24:19.116 [2024-07-15 22:48:02.593066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa3a4f0)
00:24:19.116 [2024-07-15 22:48:02.593097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:19.116 [2024-07-15 22:48:02.593114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.116 00:24:19.116 Latency(us) 00:24:19.116 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.116 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:19.116 nvme0n1 : 2.00 1928.75 241.09 0.00 0.00 8290.06 7281.78 16990.81 00:24:19.116 =================================================================================================================== 00:24:19.116 Total : 1928.75 241.09 0.00 0.00 8290.06 7281.78 16990.81 00:24:19.116 0 00:24:19.374 22:48:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:19.374 22:48:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:19.374 22:48:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:19.374 | .driver_specific 00:24:19.374 | .nvme_error 00:24:19.374 | .status_code 00:24:19.374 | .command_transient_transport_error' 00:24:19.374 22:48:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:19.631 22:48:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 124 > 0 )) 00:24:19.631 22:48:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1356313 00:24:19.631 22:48:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@942 -- # '[' -z 1356313 ']' 00:24:19.631 22:48:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # kill -0 1356313 00:24:19.631 22:48:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # uname 00:24:19.631 22:48:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:24:19.631 22:48:02 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1356313 00:24:19.631 22:48:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:24:19.631 22:48:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:24:19.631 22:48:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1356313' 00:24:19.631 killing process with pid 1356313 00:24:19.631 22:48:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@961 -- # kill 1356313 00:24:19.631 Received shutdown signal, test time was about 2.000000 seconds 00:24:19.631 00:24:19.631 Latency(us) 00:24:19.631 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.631 =================================================================================================================== 00:24:19.631 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:19.631 22:48:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # wait 1356313 00:24:19.889 22:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:24:19.889 22:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:19.889 22:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:24:19.889 22:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:19.889 22:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:19.889 22:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1356889 00:24:19.889 22:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 
00:24:19.889 22:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1356889 /var/tmp/bperf.sock 00:24:19.889 22:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@823 -- # '[' -z 1356889 ']' 00:24:19.889 22:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:19.889 22:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # local max_retries=100 00:24:19.889 22:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:19.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:19.889 22:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # xtrace_disable 00:24:19.889 22:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:19.889 [2024-07-15 22:48:03.210124] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:24:19.889 [2024-07-15 22:48:03.210230] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1356889 ] 00:24:19.889 [2024-07-15 22:48:03.271702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.889 [2024-07-15 22:48:03.386636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.146 22:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:24:20.146 22:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # return 0 00:24:20.146 22:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:20.146 22:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:20.404 22:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:20.404 22:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:20.404 22:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:20.404 22:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:20.404 22:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:20.404 22:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:20.972 nvme0n1 00:24:20.972 22:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:20.972 22:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:20.972 22:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:20.972 22:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:20.972 22:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:20.972 22:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:20.972 Running I/O for 2 seconds... 00:24:20.972 [2024-07-15 22:48:04.395077] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f9f68 00:24:20.972 [2024-07-15 22:48:04.396418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.972 [2024-07-15 22:48:04.396461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:20.972 [2024-07-15 22:48:04.409593] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190ddc00 00:24:20.972 [2024-07-15 22:48:04.411135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.972 [2024-07-15 22:48:04.411166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 
00:24:20.972 [2024-07-15 22:48:04.422605] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190dece0 00:24:20.972 [2024-07-15 22:48:04.424105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.972 [2024-07-15 22:48:04.424134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:20.972 [2024-07-15 22:48:04.435532] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190dfdc0 00:24:20.972 [2024-07-15 22:48:04.436980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.972 [2024-07-15 22:48:04.437010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:20.972 [2024-07-15 22:48:04.448458] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190e0ea0 00:24:20.972 [2024-07-15 22:48:04.449905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.972 [2024-07-15 22:48:04.449949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:20.972 [2024-07-15 22:48:04.461573] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190fc560 00:24:20.972 [2024-07-15 22:48:04.463106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.972 [2024-07-15 22:48:04.463137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.231 [2024-07-15 22:48:04.474552] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f6458 00:24:21.231 [2024-07-15 22:48:04.476038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.231 [2024-07-15 22:48:04.476067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.231 [2024-07-15 22:48:04.487395] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f5378 00:24:21.231 [2024-07-15 22:48:04.488839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.231 [2024-07-15 22:48:04.488870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.231 [2024-07-15 22:48:04.500239] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f4298 00:24:21.231 [2024-07-15 22:48:04.501635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.231 [2024-07-15 22:48:04.501667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.231 [2024-07-15 22:48:04.513041] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f31b8 00:24:21.231 [2024-07-15 22:48:04.514481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.231 [2024-07-15 22:48:04.514513] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.231 [2024-07-15 22:48:04.525944] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f20d8 00:24:21.231 [2024-07-15 22:48:04.527367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.231 [2024-07-15 22:48:04.527398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.231 [2024-07-15 22:48:04.538726] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f0ff8 00:24:21.231 [2024-07-15 22:48:04.540229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.231 [2024-07-15 22:48:04.540262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.231 [2024-07-15 22:48:04.551616] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190eff18 00:24:21.231 [2024-07-15 22:48:04.553032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.231 [2024-07-15 22:48:04.553075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.231 [2024-07-15 22:48:04.564454] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190eee38 00:24:21.231 [2024-07-15 22:48:04.565931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:21.231 [2024-07-15 22:48:04.565960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.231 [2024-07-15 22:48:04.577320] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190fb8b8 00:24:21.231 [2024-07-15 22:48:04.578737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.231 [2024-07-15 22:48:04.578768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.231 [2024-07-15 22:48:04.590123] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190fdeb0 00:24:21.231 [2024-07-15 22:48:04.591578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.231 [2024-07-15 22:48:04.591610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.231 [2024-07-15 22:48:04.602937] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190feb58 00:24:21.231 [2024-07-15 22:48:04.604354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.231 [2024-07-15 22:48:04.604386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.231 [2024-07-15 22:48:04.615701] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190fcdd0 00:24:21.231 [2024-07-15 22:48:04.617248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 
lba:17147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.231 [2024-07-15 22:48:04.617279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.231 [2024-07-15 22:48:04.628483] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190de8a8 00:24:21.231 [2024-07-15 22:48:04.629930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.231 [2024-07-15 22:48:04.629958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.231 [2024-07-15 22:48:04.641327] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190df988 00:24:21.231 [2024-07-15 22:48:04.642754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.231 [2024-07-15 22:48:04.642786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.231 [2024-07-15 22:48:04.654118] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190e0a68 00:24:21.231 [2024-07-15 22:48:04.655545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.231 [2024-07-15 22:48:04.655578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.231 [2024-07-15 22:48:04.666994] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190e1b48 00:24:21.231 [2024-07-15 22:48:04.668400] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.231 [2024-07-15 22:48:04.668432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.231 [2024-07-15 22:48:04.679732] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f6890 00:24:21.231 [2024-07-15 22:48:04.681259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.231 [2024-07-15 22:48:04.681291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.231 [2024-07-15 22:48:04.692499] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f57b0 00:24:21.231 [2024-07-15 22:48:04.693931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.231 [2024-07-15 22:48:04.693974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.231 [2024-07-15 22:48:04.705317] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f46d0 00:24:21.231 [2024-07-15 22:48:04.706749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.231 [2024-07-15 22:48:04.706780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.231 [2024-07-15 22:48:04.718035] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f35f0 
00:24:21.231 [2024-07-15 22:48:04.719451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.231 [2024-07-15 22:48:04.719482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.231 [2024-07-15 22:48:04.730797] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f2510 00:24:21.490 [2024-07-15 22:48:04.732276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.490 [2024-07-15 22:48:04.732307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.490 [2024-07-15 22:48:04.743618] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f1430 00:24:21.490 [2024-07-15 22:48:04.745110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.490 [2024-07-15 22:48:04.745139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.490 [2024-07-15 22:48:04.756417] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f0350 00:24:21.490 [2024-07-15 22:48:04.757829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.490 [2024-07-15 22:48:04.757860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.490 [2024-07-15 22:48:04.769194] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1984710) with pdu=0x2000190ef270 00:24:21.490 [2024-07-15 22:48:04.770619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.490 [2024-07-15 22:48:04.770650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.490 [2024-07-15 22:48:04.782000] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190ee190 00:24:21.490 [2024-07-15 22:48:04.783418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.490 [2024-07-15 22:48:04.783449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.490 [2024-07-15 22:48:04.794744] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190fbcf0 00:24:21.490 [2024-07-15 22:48:04.796344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.490 [2024-07-15 22:48:04.796374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.490 [2024-07-15 22:48:04.807539] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190fe2e8 00:24:21.490 [2024-07-15 22:48:04.808969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.490 [2024-07-15 22:48:04.808997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.490 [2024-07-15 22:48:04.820384] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190fe720 00:24:21.490 [2024-07-15 22:48:04.821800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:27 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.490 [2024-07-15 22:48:04.821831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.490 [2024-07-15 22:48:04.833146] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190ddc00 00:24:21.490 [2024-07-15 22:48:04.834608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.490 [2024-07-15 22:48:04.834639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.490 [2024-07-15 22:48:04.845949] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190dece0 00:24:21.490 [2024-07-15 22:48:04.847393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.490 [2024-07-15 22:48:04.847424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.490 [2024-07-15 22:48:04.858579] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190dfdc0 00:24:21.490 [2024-07-15 22:48:04.860055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.490 [2024-07-15 22:48:04.860083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 
m:0 dnr:0 00:24:21.490 [2024-07-15 22:48:04.871360] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190e0ea0 00:24:21.490 [2024-07-15 22:48:04.872781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.490 [2024-07-15 22:48:04.872818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.490 [2024-07-15 22:48:04.884122] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190fc560 00:24:21.490 [2024-07-15 22:48:04.885549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.490 [2024-07-15 22:48:04.885579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.490 [2024-07-15 22:48:04.896853] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f6458 00:24:21.490 [2024-07-15 22:48:04.898311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.490 [2024-07-15 22:48:04.898341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.490 [2024-07-15 22:48:04.909680] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f5378 00:24:21.490 [2024-07-15 22:48:04.911135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.490 [2024-07-15 22:48:04.911180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.490 [2024-07-15 22:48:04.922531] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f4298 00:24:21.490 [2024-07-15 22:48:04.923994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.490 [2024-07-15 22:48:04.924023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.490 [2024-07-15 22:48:04.935347] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f31b8 00:24:21.490 [2024-07-15 22:48:04.936710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.490 [2024-07-15 22:48:04.936740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.490 [2024-07-15 22:48:04.947302] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f20d8 00:24:21.490 [2024-07-15 22:48:04.948607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.490 [2024-07-15 22:48:04.948635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.490 [2024-07-15 22:48:04.959432] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f0ff8 00:24:21.490 [2024-07-15 22:48:04.960897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.490 [2024-07-15 22:48:04.960941] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.490 [2024-07-15 22:48:04.972296] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190eff18 00:24:21.490 [2024-07-15 22:48:04.973729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.491 [2024-07-15 22:48:04.973759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.491 [2024-07-15 22:48:04.985185] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190eee38 00:24:21.491 [2024-07-15 22:48:04.986649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.491 [2024-07-15 22:48:04.986678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.750 [2024-07-15 22:48:04.998109] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190fb8b8 00:24:21.750 [2024-07-15 22:48:04.999586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.750 [2024-07-15 22:48:04.999616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.750 [2024-07-15 22:48:05.010986] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190fdeb0 00:24:21.750 [2024-07-15 22:48:05.012411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:21.750 [2024-07-15 22:48:05.012443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.750 [2024-07-15 22:48:05.023800] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190feb58 00:24:21.750 [2024-07-15 22:48:05.025264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.750 [2024-07-15 22:48:05.025306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.750 [2024-07-15 22:48:05.036689] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190fcdd0 00:24:21.750 [2024-07-15 22:48:05.038162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.750 [2024-07-15 22:48:05.038215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.750 [2024-07-15 22:48:05.049545] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190de8a8 00:24:21.750 [2024-07-15 22:48:05.051021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.750 [2024-07-15 22:48:05.051048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.750 [2024-07-15 22:48:05.062374] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190df988 00:24:21.750 [2024-07-15 22:48:05.063822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 
lba:20652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.750 [2024-07-15 22:48:05.063853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.750 [2024-07-15 22:48:05.075291] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190e0a68 00:24:21.750 [2024-07-15 22:48:05.076740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.750 [2024-07-15 22:48:05.076772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.750 [2024-07-15 22:48:05.088169] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190e1b48 00:24:21.750 [2024-07-15 22:48:05.089657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.750 [2024-07-15 22:48:05.089689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.750 [2024-07-15 22:48:05.101073] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f6890 00:24:21.750 [2024-07-15 22:48:05.102530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.750 [2024-07-15 22:48:05.102560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.750 [2024-07-15 22:48:05.113923] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f57b0 00:24:21.750 [2024-07-15 22:48:05.115356] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.750 [2024-07-15 22:48:05.115388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.750 [2024-07-15 22:48:05.126722] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f46d0 00:24:21.750 [2024-07-15 22:48:05.128181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.750 [2024-07-15 22:48:05.128209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.750 [2024-07-15 22:48:05.139584] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f35f0 00:24:21.750 [2024-07-15 22:48:05.141092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.750 [2024-07-15 22:48:05.141119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.750 [2024-07-15 22:48:05.152365] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f2510 00:24:21.750 [2024-07-15 22:48:05.153799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.750 [2024-07-15 22:48:05.153829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.750 [2024-07-15 22:48:05.165236] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f1430 
00:24:21.750 [2024-07-15 22:48:05.166668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.750 [2024-07-15 22:48:05.166700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.750 [2024-07-15 22:48:05.178058] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f0350 00:24:21.750 [2024-07-15 22:48:05.179503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.750 [2024-07-15 22:48:05.179535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.750 [2024-07-15 22:48:05.190739] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190ef270 00:24:21.750 [2024-07-15 22:48:05.192274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.750 [2024-07-15 22:48:05.192302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.750 [2024-07-15 22:48:05.203504] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190ee190 00:24:21.750 [2024-07-15 22:48:05.204953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.750 [2024-07-15 22:48:05.204986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.750 [2024-07-15 22:48:05.216285] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1984710) with pdu=0x2000190fbcf0 00:24:21.750 [2024-07-15 22:48:05.217695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.750 [2024-07-15 22:48:05.217726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.750 [2024-07-15 22:48:05.229018] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190fe2e8 00:24:21.750 [2024-07-15 22:48:05.230425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.750 [2024-07-15 22:48:05.230455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.750 [2024-07-15 22:48:05.241688] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190fe720 00:24:21.750 [2024-07-15 22:48:05.243249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.750 [2024-07-15 22:48:05.243280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.010 [2024-07-15 22:48:05.254563] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190ddc00 00:24:22.010 [2024-07-15 22:48:05.255992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.010 [2024-07-15 22:48:05.256020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.010 [2024-07-15 22:48:05.267282] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190dece0 00:24:22.010 [2024-07-15 22:48:05.268706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.011 [2024-07-15 22:48:05.268738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.011 [2024-07-15 22:48:05.280087] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190dfdc0 00:24:22.011 [2024-07-15 22:48:05.281515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.011 [2024-07-15 22:48:05.281547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.011 [2024-07-15 22:48:05.292824] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190e0ea0 00:24:22.011 [2024-07-15 22:48:05.294292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.011 [2024-07-15 22:48:05.294324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.011 [2024-07-15 22:48:05.305558] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190fc560 00:24:22.011 [2024-07-15 22:48:05.307022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.011 [2024-07-15 22:48:05.307051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 
m:0 dnr:0 00:24:22.011 [2024-07-15 22:48:05.318403] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f6458 00:24:22.011 [2024-07-15 22:48:05.319844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.011 [2024-07-15 22:48:05.319882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.011 [2024-07-15 22:48:05.331171] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f5378 00:24:22.011 [2024-07-15 22:48:05.332576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.011 [2024-07-15 22:48:05.332608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.011 [2024-07-15 22:48:05.344025] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f4298 00:24:22.011 [2024-07-15 22:48:05.345434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.011 [2024-07-15 22:48:05.345464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.011 [2024-07-15 22:48:05.356786] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f31b8 00:24:22.011 [2024-07-15 22:48:05.358322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.011 [2024-07-15 22:48:05.358353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.011 [2024-07-15 22:48:05.369578] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f20d8 00:24:22.011 [2024-07-15 22:48:05.371031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.011 [2024-07-15 22:48:05.371059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.011 [2024-07-15 22:48:05.382369] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f0ff8 00:24:22.011 [2024-07-15 22:48:05.383788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.011 [2024-07-15 22:48:05.383819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.011 [2024-07-15 22:48:05.395171] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190eff18 00:24:22.011 [2024-07-15 22:48:05.396658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.011 [2024-07-15 22:48:05.396689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.011 [2024-07-15 22:48:05.407984] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190eee38 00:24:22.011 [2024-07-15 22:48:05.409395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.011 [2024-07-15 22:48:05.409426] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.011 [2024-07-15 22:48:05.420661] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190fb8b8 00:24:22.011 [2024-07-15 22:48:05.422132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.011 [2024-07-15 22:48:05.422176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.011 [2024-07-15 22:48:05.433465] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190fdeb0 00:24:22.011 [2024-07-15 22:48:05.434931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.011 [2024-07-15 22:48:05.434960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.011 [2024-07-15 22:48:05.446362] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190feb58 00:24:22.011 [2024-07-15 22:48:05.447811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.011 [2024-07-15 22:48:05.447843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.011 [2024-07-15 22:48:05.459234] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190fcdd0 00:24:22.011 [2024-07-15 22:48:05.460648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:22.011 [2024-07-15 22:48:05.460681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.011 [2024-07-15 22:48:05.472214] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190de8a8 00:24:22.011 [2024-07-15 22:48:05.473659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.011 [2024-07-15 22:48:05.473691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.011 [2024-07-15 22:48:05.485023] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190df988 00:24:22.011 [2024-07-15 22:48:05.486441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.011 [2024-07-15 22:48:05.486472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.011 [2024-07-15 22:48:05.497756] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190e0a68 00:24:22.011 [2024-07-15 22:48:05.499229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.011 [2024-07-15 22:48:05.499261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.272 [2024-07-15 22:48:05.510615] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190e1b48 00:24:22.273 [2024-07-15 22:48:05.512110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 
lba:23629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.273 [2024-07-15 22:48:05.512139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.273 [2024-07-15 22:48:05.523470] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f6890 00:24:22.273 [2024-07-15 22:48:05.524901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.273 [2024-07-15 22:48:05.524945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.273 [2024-07-15 22:48:05.536296] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f57b0 00:24:22.273 [2024-07-15 22:48:05.537726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.273 [2024-07-15 22:48:05.537763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.273 [2024-07-15 22:48:05.549126] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f46d0 00:24:22.273 [2024-07-15 22:48:05.550560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.273 [2024-07-15 22:48:05.550591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.273 [2024-07-15 22:48:05.561972] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f35f0 00:24:22.273 [2024-07-15 22:48:05.563374] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.273 [2024-07-15 22:48:05.563405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.273 [2024-07-15 22:48:05.574672] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f2510 00:24:22.273 [2024-07-15 22:48:05.576136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.273 [2024-07-15 22:48:05.576164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.273 [2024-07-15 22:48:05.587420] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f1430 00:24:22.273 [2024-07-15 22:48:05.588858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.273 [2024-07-15 22:48:05.588899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.273 [2024-07-15 22:48:05.600224] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f0350 00:24:22.273 [2024-07-15 22:48:05.601639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.273 [2024-07-15 22:48:05.601670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.273 [2024-07-15 22:48:05.613015] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190ef270 
00:24:22.273 [2024-07-15 22:48:05.614445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.273 [2024-07-15 22:48:05.614475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.273 [2024-07-15 22:48:05.625735] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190ee190 00:24:22.273 [2024-07-15 22:48:05.627274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.273 [2024-07-15 22:48:05.627306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.273 [2024-07-15 22:48:05.638532] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190fbcf0 00:24:22.273 [2024-07-15 22:48:05.639945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.273 [2024-07-15 22:48:05.639973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.273 [2024-07-15 22:48:05.651402] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190fe2e8 00:24:22.273 [2024-07-15 22:48:05.652823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.273 [2024-07-15 22:48:05.652854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.273 [2024-07-15 22:48:05.664202] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1984710) with pdu=0x2000190fe720 00:24:22.273 [2024-07-15 22:48:05.665641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.273 [2024-07-15 22:48:05.665672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.273 [2024-07-15 22:48:05.676980] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190ddc00 00:24:22.273 [2024-07-15 22:48:05.678401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.273 [2024-07-15 22:48:05.678433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.273 [2024-07-15 22:48:05.689743] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190dece0 00:24:22.273 [2024-07-15 22:48:05.691275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.273 [2024-07-15 22:48:05.691309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.273 [2024-07-15 22:48:05.702571] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190dfdc0 00:24:22.273 [2024-07-15 22:48:05.704024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.273 [2024-07-15 22:48:05.704067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.273 [2024-07-15 22:48:05.715430] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190e0ea0 00:24:22.273 [2024-07-15 22:48:05.716861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.273 [2024-07-15 22:48:05.716900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.273 [2024-07-15 22:48:05.728225] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190fc560 00:24:22.273 [2024-07-15 22:48:05.729649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.273 [2024-07-15 22:48:05.729680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.273 [2024-07-15 22:48:05.741066] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f6458 00:24:22.273 [2024-07-15 22:48:05.742495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.273 [2024-07-15 22:48:05.742526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.273 [2024-07-15 22:48:05.753790] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f5378 00:24:22.273 [2024-07-15 22:48:05.755285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.273 [2024-07-15 22:48:05.755316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 
m:0 dnr:0 00:24:22.273 [2024-07-15 22:48:05.766625] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f4298 00:24:22.273 [2024-07-15 22:48:05.768111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.273 [2024-07-15 22:48:05.768138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.533 [2024-07-15 22:48:05.779424] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f31b8 00:24:22.533 [2024-07-15 22:48:05.780860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.533 [2024-07-15 22:48:05.780899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.533 [2024-07-15 22:48:05.792259] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f20d8 00:24:22.533 [2024-07-15 22:48:05.793683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.533 [2024-07-15 22:48:05.793714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.533 [2024-07-15 22:48:05.805035] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f0ff8 00:24:22.533 [2024-07-15 22:48:05.806448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.533 [2024-07-15 22:48:05.806480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.533 [2024-07-15 22:48:05.817747] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190eff18 00:24:22.533 [2024-07-15 22:48:05.819231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.533 [2024-07-15 22:48:05.819262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.533 [2024-07-15 22:48:05.830565] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190eee38 00:24:22.533 [2024-07-15 22:48:05.831996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.533 [2024-07-15 22:48:05.832025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.533 [2024-07-15 22:48:05.843400] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190fb8b8 00:24:22.533 [2024-07-15 22:48:05.844804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.533 [2024-07-15 22:48:05.844836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.533 [2024-07-15 22:48:05.856109] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190fdeb0 00:24:22.533 [2024-07-15 22:48:05.857577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.533 [2024-07-15 22:48:05.857608] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.533 [2024-07-15 22:48:05.868903] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190feb58 00:24:22.533 [2024-07-15 22:48:05.870330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.533 [2024-07-15 22:48:05.870366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.533 [2024-07-15 22:48:05.881649] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190fcdd0 00:24:22.533 [2024-07-15 22:48:05.883129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.533 [2024-07-15 22:48:05.883157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.533 [2024-07-15 22:48:05.894413] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190de8a8 00:24:22.533 [2024-07-15 22:48:05.895822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.533 [2024-07-15 22:48:05.895854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.533 [2024-07-15 22:48:05.907225] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190df988 00:24:22.533 [2024-07-15 22:48:05.908661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:22.533 [2024-07-15 22:48:05.908692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.533 [2024-07-15 22:48:05.920018] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190e0a68 00:24:22.533 [2024-07-15 22:48:05.921448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.533 [2024-07-15 22:48:05.921491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.533 [2024-07-15 22:48:05.932760] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190e1b48 00:24:22.533 [2024-07-15 22:48:05.934217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.533 [2024-07-15 22:48:05.934262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.533 [2024-07-15 22:48:05.945551] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f6890 00:24:22.533 [2024-07-15 22:48:05.947003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.533 [2024-07-15 22:48:05.947032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.533 [2024-07-15 22:48:05.958358] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f57b0 00:24:22.533 [2024-07-15 22:48:05.959769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 
lba:4219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.533 [2024-07-15 22:48:05.959801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.533 [2024-07-15 22:48:05.971151] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f46d0 00:24:22.533 [2024-07-15 22:48:05.972581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.533 [2024-07-15 22:48:05.972612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.533 [2024-07-15 22:48:05.983871] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f35f0 00:24:22.533 [2024-07-15 22:48:05.985383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.533 [2024-07-15 22:48:05.985415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.533 [2024-07-15 22:48:05.996557] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f2510 00:24:22.533 [2024-07-15 22:48:05.998027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.533 [2024-07-15 22:48:05.998057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.533 [2024-07-15 22:48:06.009325] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f1430 00:24:22.533 [2024-07-15 22:48:06.010768] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.533 [2024-07-15 22:48:06.010801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.533 [2024-07-15 22:48:06.022198] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f0350 00:24:22.533 [2024-07-15 22:48:06.023625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.533 [2024-07-15 22:48:06.023656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.793 [2024-07-15 22:48:06.035052] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190ef270 00:24:22.794 [2024-07-15 22:48:06.036514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.794 [2024-07-15 22:48:06.036545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.794 [2024-07-15 22:48:06.047857] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190ee190 00:24:22.794 [2024-07-15 22:48:06.049359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.794 [2024-07-15 22:48:06.049390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.794 [2024-07-15 22:48:06.060689] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190fbcf0 
00:24:22.794 [2024-07-15 22:48:06.062192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.794 [2024-07-15 22:48:06.062238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.794 [2024-07-15 22:48:06.072707] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190fe2e8 00:24:22.794 [2024-07-15 22:48:06.074008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.794 [2024-07-15 22:48:06.074036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.794 [2024-07-15 22:48:06.085071] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190fe720 00:24:22.794 [2024-07-15 22:48:06.086497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.794 [2024-07-15 22:48:06.086534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.794 [2024-07-15 22:48:06.097993] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190ddc00 00:24:22.794 [2024-07-15 22:48:06.099434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.794 [2024-07-15 22:48:06.099466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.794 [2024-07-15 22:48:06.110886] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1984710) with pdu=0x2000190dece0 00:24:22.794 [2024-07-15 22:48:06.112385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.794 [2024-07-15 22:48:06.112416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.794 [2024-07-15 22:48:06.123663] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190dfdc0 00:24:22.794 [2024-07-15 22:48:06.125038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.794 [2024-07-15 22:48:06.125066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.794 [2024-07-15 22:48:06.135783] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190e0ea0 00:24:22.794 [2024-07-15 22:48:06.137261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.794 [2024-07-15 22:48:06.137294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.794 [2024-07-15 22:48:06.148637] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190fc560 00:24:22.794 [2024-07-15 22:48:06.150110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.794 [2024-07-15 22:48:06.150138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.794 [2024-07-15 22:48:06.161580] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f6458 00:24:22.794 [2024-07-15 22:48:06.163037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.794 [2024-07-15 22:48:06.163066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.794 [2024-07-15 22:48:06.174514] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f5378 00:24:22.794 [2024-07-15 22:48:06.175990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.794 [2024-07-15 22:48:06.176018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.794 [2024-07-15 22:48:06.187396] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f4298 00:24:22.794 [2024-07-15 22:48:06.188785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.794 [2024-07-15 22:48:06.188820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.794 [2024-07-15 22:48:06.199527] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f31b8 00:24:22.794 [2024-07-15 22:48:06.200894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.794 [2024-07-15 22:48:06.200940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 
m:0 dnr:0 00:24:22.794 [2024-07-15 22:48:06.211665] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f20d8 00:24:22.794 [2024-07-15 22:48:06.213044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.794 [2024-07-15 22:48:06.213074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.794 [2024-07-15 22:48:06.223679] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f0ff8 00:24:22.794 [2024-07-15 22:48:06.225033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.794 [2024-07-15 22:48:06.225062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.794 [2024-07-15 22:48:06.235657] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190eff18 00:24:22.794 [2024-07-15 22:48:06.236962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.794 [2024-07-15 22:48:06.236991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.794 [2024-07-15 22:48:06.247626] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190eee38 00:24:22.794 [2024-07-15 22:48:06.248948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.794 [2024-07-15 22:48:06.248977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.794 [2024-07-15 22:48:06.259671] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190fb8b8 00:24:22.794 [2024-07-15 22:48:06.260992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.794 [2024-07-15 22:48:06.261019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.794 [2024-07-15 22:48:06.271665] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190fdeb0 00:24:22.794 [2024-07-15 22:48:06.272960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.794 [2024-07-15 22:48:06.272988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.794 [2024-07-15 22:48:06.283640] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190feb58 00:24:22.794 [2024-07-15 22:48:06.284995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.794 [2024-07-15 22:48:06.285023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:23.053 [2024-07-15 22:48:06.295610] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190fcdd0 00:24:23.053 [2024-07-15 22:48:06.296987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.053 [2024-07-15 22:48:06.297015] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:23.053 [2024-07-15 22:48:06.307612] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190de8a8 00:24:23.053 [2024-07-15 22:48:06.308998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.053 [2024-07-15 22:48:06.309026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:23.053 [2024-07-15 22:48:06.319607] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190df988 00:24:23.053 [2024-07-15 22:48:06.320976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.053 [2024-07-15 22:48:06.321004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:23.053 [2024-07-15 22:48:06.331568] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190e0a68 00:24:23.053 [2024-07-15 22:48:06.332907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.053 [2024-07-15 22:48:06.332935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:23.053 [2024-07-15 22:48:06.343521] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190e1b48 00:24:23.053 [2024-07-15 22:48:06.344924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:23.053 [2024-07-15 22:48:06.344952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:24:23.053 [2024-07-15 22:48:06.355614] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f6890
00:24:23.053 [2024-07-15 22:48:06.356908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.053 [2024-07-15 22:48:06.356937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:24:23.053 [2024-07-15 22:48:06.367572] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f57b0
00:24:23.053 [2024-07-15 22:48:06.368985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.053 [2024-07-15 22:48:06.369013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:24:23.053 [2024-07-15 22:48:06.379538] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1984710) with pdu=0x2000190f46d0
00:24:23.053 [2024-07-15 22:48:06.380904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.053 [2024-07-15 22:48:06.380933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:24:23.053
00:24:23.053 Latency(us)
00:24:23.053 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:23.053 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:24:23.053 nvme0n1 : 2.00 20025.95 78.23 0.00 0.00 6381.74 2767.08 16602.45
00:24:23.053 ===================================================================================================================
00:24:23.053 Total : 20025.95 78.23 0.00 0.00 6381.74 2767.08 16602.45
00:24:23.053 0
00:24:23.053 22:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:23.053 22:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:23.054 22:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:23.054 22:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:23.054 | .driver_specific
00:24:23.054 | .nvme_error
00:24:23.054 | .status_code
00:24:23.054 | .command_transient_transport_error'
00:24:23.312 22:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 157 > 0 ))
00:24:23.312 22:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1356889
00:24:23.312 22:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@942 -- # '[' -z 1356889 ']'
00:24:23.312 22:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # kill -0 1356889
00:24:23.312 22:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # uname
00:24:23.312 22:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:24:23.312 22:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1356889
00:24:23.313 22:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # process_name=reactor_1
00:24:23.313 22:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']'
00:24:23.313 22:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1356889'
killing process with pid 1356889
22:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@961 -- # kill 1356889
Received shutdown signal, test time was about 2.000000 seconds
00:24:23.313
00:24:23.313 Latency(us)
00:24:23.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:23.313 ===================================================================================================================
00:24:23.313 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:23.313 22:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # wait 1356889
00:24:23.571 22:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:24:23.571 22:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:23.571 22:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:24:23.571 22:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:24:23.571 22:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:24:23.571 22:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1357293
00:24:23.571 22:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:24:23.571 22:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1357293 /var/tmp/bperf.sock
00:24:23.571 22:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@823 -- # '[' -z 1357293 ']'
00:24:23.571 22:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock
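The get_transient_errcount trace above fetches bdev_get_iostat over the bperf RPC socket and pipes it through jq to pull out the counter that the test asserts on with `(( 157 > 0 ))`. As a rough illustration, the same extraction in Python; the JSON below is a hand-written stand-in whose shape is assumed from the jq path, not output captured from this run:

```python
import json

# Assumed stand-in for `rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1`;
# only the fields the jq filter touches are sketched here.
iostat_json = """
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 157
          }
        }
      }
    }
  ]
}
"""

def get_transient_errcount(payload: str) -> int:
    # Mirrors the jq filter: .bdevs[0] | .driver_specific | .nvme_error
    #                        | .status_code | .command_transient_transport_error
    bdev = json.loads(payload)["bdevs"][0]
    status = bdev["driver_specific"]["nvme_error"]["status_code"]
    return status["command_transient_transport_error"]

print(get_transient_errcount(iostat_json))  # -> 157, so (( 157 > 0 )) passes
```

The test deliberately injects digest corruption, so a nonzero transient-transport-error count here means the error path worked as intended.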
00:24:23.571 22:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # local max_retries=100
00:24:23.571 22:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:23.571 22:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # xtrace_disable
00:24:23.571 22:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:23.571 [2024-07-15 22:48:07.009190] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization...
00:24:23.571 [2024-07-15 22:48:07.009300] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1357293 ]
00:24:23.571 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:23.571 Zero copy mechanism will not be used.
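The waitforlisten step above blocks (with max_retries=100) until the freshly started bdevperf process is accepting RPCs on the UNIX domain socket /var/tmp/bperf.sock. A minimal sketch of such a wait loop in Python; the default retry count and sleep interval here are illustrative, not the values autotest_common.sh uses:

```python
import socket
import time

def wait_for_unix_socket(path: str, max_retries: int = 100,
                         delay: float = 0.1) -> bool:
    """Poll until a process is accepting connections on a UNIX domain socket."""
    for _ in range(max_retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)  # succeeds only once the server is listening
            return True
        except OSError:
            time.sleep(delay)  # socket absent or not listening yet; retry
        finally:
            s.close()
    return False
```

Once the socket is connectable, digest.sh issues all further rpc.py calls against it with `-s /var/tmp/bperf.sock`.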
00:24:23.571 [2024-07-15 22:48:07.068406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:23.828 [2024-07-15 22:48:07.178523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:23.828 22:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:24:23.828 22:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # return 0
00:24:23.828 22:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:23.828 22:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:24.085 22:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:24.085 22:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable
00:24:24.085 22:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:24.085 22:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:24:24.085 22:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:24.085 22:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:24.651 nvme0n1
00:24:24.651 22:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:24.651 22:48:08
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:24.651 22:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:24.651 22:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:24.651 22:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:24.651 22:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:24.651 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:24.651 Zero copy mechanism will not be used. 00:24:24.651 Running I/O for 2 seconds... 00:24:24.651 [2024-07-15 22:48:08.138596] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:24.651 [2024-07-15 22:48:08.138940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.651 [2024-07-15 22:48:08.138979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.910 [2024-07-15 22:48:08.155473] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:24.910 [2024-07-15 22:48:08.155900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.910 [2024-07-15 22:48:08.155950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.910 [2024-07-15 22:48:08.174216] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:24.910 [2024-07-15 
22:48:08.174463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.910 [2024-07-15 22:48:08.174513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.910 [2024-07-15 22:48:08.193007] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:24.910 [2024-07-15 22:48:08.193420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.910 [2024-07-15 22:48:08.193454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.910 [2024-07-15 22:48:08.213811] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:24.910 [2024-07-15 22:48:08.214214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.910 [2024-07-15 22:48:08.214249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.910 [2024-07-15 22:48:08.233452] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:24.910 [2024-07-15 22:48:08.234015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.910 [2024-07-15 22:48:08.234044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.910 [2024-07-15 22:48:08.253061] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:24.910 [2024-07-15 22:48:08.253481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.910 [2024-07-15 22:48:08.253514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.910 [2024-07-15 22:48:08.273060] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:24.910 [2024-07-15 22:48:08.273531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.910 [2024-07-15 22:48:08.273563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.910 [2024-07-15 22:48:08.290969] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:24.910 [2024-07-15 22:48:08.291571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.910 [2024-07-15 22:48:08.291604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.910 [2024-07-15 22:48:08.309390] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:24.910 [2024-07-15 22:48:08.309986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.910 [2024-07-15 22:48:08.310028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.910 [2024-07-15 22:48:08.327975] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:24.910 [2024-07-15 22:48:08.328526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.910 [2024-07-15 22:48:08.328559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.910 [2024-07-15 22:48:08.346804] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:24.910 [2024-07-15 22:48:08.347263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.910 [2024-07-15 22:48:08.347305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.910 [2024-07-15 22:48:08.365816] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:24.910 [2024-07-15 22:48:08.366252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.910 [2024-07-15 22:48:08.366280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.910 [2024-07-15 22:48:08.381686] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:24.910 [2024-07-15 22:48:08.382085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.910 [2024-07-15 22:48:08.382124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:24:24.910 [2024-07-15 22:48:08.397659] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:24.911 [2024-07-15 22:48:08.398042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.911 [2024-07-15 22:48:08.398072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.169 [2024-07-15 22:48:08.414772] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.169 [2024-07-15 22:48:08.415152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.169 [2024-07-15 22:48:08.415200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.169 [2024-07-15 22:48:08.431778] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.169 [2024-07-15 22:48:08.432229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.169 [2024-07-15 22:48:08.432288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.169 [2024-07-15 22:48:08.451020] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.169 [2024-07-15 22:48:08.451526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.169 [2024-07-15 22:48:08.451555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.169 [2024-07-15 22:48:08.468299] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.169 [2024-07-15 22:48:08.468726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.169 [2024-07-15 22:48:08.468754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.169 [2024-07-15 22:48:08.486687] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.169 [2024-07-15 22:48:08.487132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.169 [2024-07-15 22:48:08.487177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.169 [2024-07-15 22:48:08.503832] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.169 [2024-07-15 22:48:08.504220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.169 [2024-07-15 22:48:08.504265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.169 [2024-07-15 22:48:08.519942] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.170 [2024-07-15 22:48:08.520321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.170 [2024-07-15 22:48:08.520360] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.170 [2024-07-15 22:48:08.538092] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.170 [2024-07-15 22:48:08.538575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.170 [2024-07-15 22:48:08.538602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.170 [2024-07-15 22:48:08.556129] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.170 [2024-07-15 22:48:08.556509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.170 [2024-07-15 22:48:08.556537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.170 [2024-07-15 22:48:08.575239] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.170 [2024-07-15 22:48:08.575684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.170 [2024-07-15 22:48:08.575711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.170 [2024-07-15 22:48:08.593740] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.170 [2024-07-15 22:48:08.594166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:25.170 [2024-07-15 22:48:08.594210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.170 [2024-07-15 22:48:08.612170] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.170 [2024-07-15 22:48:08.612638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.170 [2024-07-15 22:48:08.612666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.170 [2024-07-15 22:48:08.628119] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.170 [2024-07-15 22:48:08.628539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.170 [2024-07-15 22:48:08.628583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.170 [2024-07-15 22:48:08.646550] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.170 [2024-07-15 22:48:08.646966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.170 [2024-07-15 22:48:08.647000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.170 [2024-07-15 22:48:08.665023] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.170 [2024-07-15 22:48:08.665451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.170 [2024-07-15 22:48:08.665494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.428 [2024-07-15 22:48:08.683598] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.428 [2024-07-15 22:48:08.684024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.428 [2024-07-15 22:48:08.684068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.428 [2024-07-15 22:48:08.703716] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.428 [2024-07-15 22:48:08.704370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.428 [2024-07-15 22:48:08.704412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.428 [2024-07-15 22:48:08.722782] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.428 [2024-07-15 22:48:08.723227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.428 [2024-07-15 22:48:08.723266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.428 [2024-07-15 22:48:08.740097] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.428 [2024-07-15 22:48:08.740467] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.428 [2024-07-15 22:48:08.740495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.428 [2024-07-15 22:48:08.757281] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.428 [2024-07-15 22:48:08.757670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.428 [2024-07-15 22:48:08.757725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.428 [2024-07-15 22:48:08.776265] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.428 [2024-07-15 22:48:08.776650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.428 [2024-07-15 22:48:08.776703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.428 [2024-07-15 22:48:08.794903] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.428 [2024-07-15 22:48:08.795293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.428 [2024-07-15 22:48:08.795347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.428 [2024-07-15 22:48:08.813717] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 
00:24:25.428 [2024-07-15 22:48:08.814132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.428 [2024-07-15 22:48:08.814176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.428 [2024-07-15 22:48:08.832141] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.428 [2024-07-15 22:48:08.832387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.428 [2024-07-15 22:48:08.832415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.428 [2024-07-15 22:48:08.850819] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.428 [2024-07-15 22:48:08.851367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.428 [2024-07-15 22:48:08.851395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.428 [2024-07-15 22:48:08.868503] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.428 [2024-07-15 22:48:08.869050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.428 [2024-07-15 22:48:08.869079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.428 [2024-07-15 22:48:08.886915] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.428 [2024-07-15 22:48:08.887404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.428 [2024-07-15 22:48:08.887436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.428 [2024-07-15 22:48:08.904464] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.428 [2024-07-15 22:48:08.904889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.428 [2024-07-15 22:48:08.904946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.429 [2024-07-15 22:48:08.924780] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.429 [2024-07-15 22:48:08.925257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.429 [2024-07-15 22:48:08.925299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.687 [2024-07-15 22:48:08.943997] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.687 [2024-07-15 22:48:08.944421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.687 [2024-07-15 22:48:08.944449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.687 [2024-07-15 
22:48:08.962164] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.687 [2024-07-15 22:48:08.962676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.687 [2024-07-15 22:48:08.962725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.687 [2024-07-15 22:48:08.979796] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.687 [2024-07-15 22:48:08.980169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.687 [2024-07-15 22:48:08.980198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.687 [2024-07-15 22:48:08.997331] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.687 [2024-07-15 22:48:08.997784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.687 [2024-07-15 22:48:08.997828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.687 [2024-07-15 22:48:09.017032] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.687 [2024-07-15 22:48:09.017456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.687 [2024-07-15 22:48:09.017485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.687 [2024-07-15 22:48:09.035020] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.687 [2024-07-15 22:48:09.035456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.687 [2024-07-15 22:48:09.035500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.687 [2024-07-15 22:48:09.052989] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.687 [2024-07-15 22:48:09.053421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.687 [2024-07-15 22:48:09.053450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.687 [2024-07-15 22:48:09.071022] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.687 [2024-07-15 22:48:09.071414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.687 [2024-07-15 22:48:09.071458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.687 [2024-07-15 22:48:09.089761] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.687 [2024-07-15 22:48:09.090261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.687 [2024-07-15 22:48:09.090306] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.687 [2024-07-15 22:48:09.107710] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.687 [2024-07-15 22:48:09.108125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.687 [2024-07-15 22:48:09.108154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.687 [2024-07-15 22:48:09.127030] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.687 [2024-07-15 22:48:09.127607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.687 [2024-07-15 22:48:09.127650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.687 [2024-07-15 22:48:09.145967] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.687 [2024-07-15 22:48:09.146358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.687 [2024-07-15 22:48:09.146383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.687 [2024-07-15 22:48:09.163183] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.687 [2024-07-15 22:48:09.163554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.687 [2024-07-15 
22:48:09.163596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.687 [2024-07-15 22:48:09.181626] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.687 [2024-07-15 22:48:09.182007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.687 [2024-07-15 22:48:09.182037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.946 [2024-07-15 22:48:09.199741] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.946 [2024-07-15 22:48:09.200115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.946 [2024-07-15 22:48:09.200145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.946 [2024-07-15 22:48:09.219301] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.946 [2024-07-15 22:48:09.219842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.946 [2024-07-15 22:48:09.219891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.946 [2024-07-15 22:48:09.235981] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.947 [2024-07-15 22:48:09.236402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.947 [2024-07-15 22:48:09.236442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.947 [2024-07-15 22:48:09.252213] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.947 [2024-07-15 22:48:09.252580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.947 [2024-07-15 22:48:09.252624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.947 [2024-07-15 22:48:09.268094] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.947 [2024-07-15 22:48:09.268547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.947 [2024-07-15 22:48:09.268573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.947 [2024-07-15 22:48:09.286524] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.947 [2024-07-15 22:48:09.286896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.947 [2024-07-15 22:48:09.286947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.947 [2024-07-15 22:48:09.305268] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.947 [2024-07-15 22:48:09.305672] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.947 [2024-07-15 22:48:09.305699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.947 [2024-07-15 22:48:09.322758] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.947 [2024-07-15 22:48:09.323276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.947 [2024-07-15 22:48:09.323321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.947 [2024-07-15 22:48:09.341459] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.947 [2024-07-15 22:48:09.341837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.947 [2024-07-15 22:48:09.341864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.947 [2024-07-15 22:48:09.360301] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.947 [2024-07-15 22:48:09.360671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.947 [2024-07-15 22:48:09.360699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.947 [2024-07-15 22:48:09.378429] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.947 [2024-07-15 
22:48:09.378888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.947 [2024-07-15 22:48:09.378927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.947 [2024-07-15 22:48:09.395417] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.947 [2024-07-15 22:48:09.395785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.947 [2024-07-15 22:48:09.395831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.947 [2024-07-15 22:48:09.414591] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.947 [2024-07-15 22:48:09.415133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.947 [2024-07-15 22:48:09.415175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.947 [2024-07-15 22:48:09.432249] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:25.947 [2024-07-15 22:48:09.432553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.947 [2024-07-15 22:48:09.432601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.206 [2024-07-15 22:48:09.448585] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.206 [2024-07-15 22:48:09.448987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.206 [2024-07-15 22:48:09.449018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.206 [2024-07-15 22:48:09.465262] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.206 [2024-07-15 22:48:09.465646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.206 [2024-07-15 22:48:09.465674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.206 [2024-07-15 22:48:09.481361] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.206 [2024-07-15 22:48:09.481801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.206 [2024-07-15 22:48:09.481831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.206 [2024-07-15 22:48:09.499839] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.207 [2024-07-15 22:48:09.500236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.207 [2024-07-15 22:48:09.500264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.207 [2024-07-15 22:48:09.516680] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.207 [2024-07-15 22:48:09.517066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.207 [2024-07-15 22:48:09.517112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.207 [2024-07-15 22:48:09.535738] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.207 [2024-07-15 22:48:09.536134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.207 [2024-07-15 22:48:09.536164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.207 [2024-07-15 22:48:09.552709] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.207 [2024-07-15 22:48:09.553169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.207 [2024-07-15 22:48:09.553198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.207 [2024-07-15 22:48:09.571033] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.207 [2024-07-15 22:48:09.571392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.207 [2024-07-15 22:48:09.571421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:24:26.207 [2024-07-15 22:48:09.590117] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.207 [2024-07-15 22:48:09.590554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.207 [2024-07-15 22:48:09.590581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.207 [2024-07-15 22:48:09.608212] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.207 [2024-07-15 22:48:09.608630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.207 [2024-07-15 22:48:09.608657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.207 [2024-07-15 22:48:09.627604] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.207 [2024-07-15 22:48:09.627912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.207 [2024-07-15 22:48:09.627941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.207 [2024-07-15 22:48:09.644646] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.207 [2024-07-15 22:48:09.645226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.207 [2024-07-15 22:48:09.645254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.207 [2024-07-15 22:48:09.662088] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.207 [2024-07-15 22:48:09.662470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.207 [2024-07-15 22:48:09.662498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.207 [2024-07-15 22:48:09.680883] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.207 [2024-07-15 22:48:09.681309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.207 [2024-07-15 22:48:09.681349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.207 [2024-07-15 22:48:09.699058] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.207 [2024-07-15 22:48:09.699565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.207 [2024-07-15 22:48:09.699612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.467 [2024-07-15 22:48:09.718215] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.467 [2024-07-15 22:48:09.718641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.467 [2024-07-15 22:48:09.718685] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.467 [2024-07-15 22:48:09.736929] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.467 [2024-07-15 22:48:09.737330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.467 [2024-07-15 22:48:09.737376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.467 [2024-07-15 22:48:09.754753] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.467 [2024-07-15 22:48:09.755316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.467 [2024-07-15 22:48:09.755349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.467 [2024-07-15 22:48:09.773947] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.467 [2024-07-15 22:48:09.774369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.467 [2024-07-15 22:48:09.774414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.467 [2024-07-15 22:48:09.791411] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.467 [2024-07-15 22:48:09.791793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:26.467 [2024-07-15 22:48:09.791821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.467 [2024-07-15 22:48:09.809979] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.467 [2024-07-15 22:48:09.810398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.467 [2024-07-15 22:48:09.810444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.467 [2024-07-15 22:48:09.827952] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.467 [2024-07-15 22:48:09.828455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.467 [2024-07-15 22:48:09.828498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.467 [2024-07-15 22:48:09.846888] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.467 [2024-07-15 22:48:09.847286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.467 [2024-07-15 22:48:09.847332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.467 [2024-07-15 22:48:09.864939] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.467 [2024-07-15 22:48:09.865351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.467 [2024-07-15 22:48:09.865397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.467 [2024-07-15 22:48:09.882387] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.467 [2024-07-15 22:48:09.882908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.467 [2024-07-15 22:48:09.882938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.467 [2024-07-15 22:48:09.900357] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.467 [2024-07-15 22:48:09.900836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.467 [2024-07-15 22:48:09.900889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.467 [2024-07-15 22:48:09.919937] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.467 [2024-07-15 22:48:09.920327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.467 [2024-07-15 22:48:09.920370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.467 [2024-07-15 22:48:09.938802] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.467 [2024-07-15 22:48:09.939204] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.467 [2024-07-15 22:48:09.939233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.467 [2024-07-15 22:48:09.957337] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.467 [2024-07-15 22:48:09.957770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.467 [2024-07-15 22:48:09.957800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.726 [2024-07-15 22:48:09.977397] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.726 [2024-07-15 22:48:09.977911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.726 [2024-07-15 22:48:09.977954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.726 [2024-07-15 22:48:09.996144] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.726 [2024-07-15 22:48:09.996579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.726 [2024-07-15 22:48:09.996624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.726 [2024-07-15 22:48:10.015050] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 
00:24:26.726 [2024-07-15 22:48:10.015477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.726 [2024-07-15 22:48:10.015542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.726 [2024-07-15 22:48:10.032124] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.726 [2024-07-15 22:48:10.032578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.726 [2024-07-15 22:48:10.032641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.726 [2024-07-15 22:48:10.048856] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.726 [2024-07-15 22:48:10.049278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.726 [2024-07-15 22:48:10.049329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.726 [2024-07-15 22:48:10.064109] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.726 [2024-07-15 22:48:10.064485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.726 [2024-07-15 22:48:10.064531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.726 [2024-07-15 22:48:10.081158] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.726 [2024-07-15 22:48:10.081614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.726 [2024-07-15 22:48:10.081642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.726 [2024-07-15 22:48:10.099916] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.726 [2024-07-15 22:48:10.100457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.726 [2024-07-15 22:48:10.100485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.726 [2024-07-15 22:48:10.118124] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b9af0) with pdu=0x2000190fef90 00:24:26.726 [2024-07-15 22:48:10.118552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.726 [2024-07-15 22:48:10.118593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.726 00:24:26.726 Latency(us) 00:24:26.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.726 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:26.726 nvme0n1 : 2.01 1712.78 214.10 0.00 0.00 9315.53 3082.62 20486.07 00:24:26.726 =================================================================================================================== 00:24:26.726 Total : 1712.78 214.10 0.00 0.00 9315.53 3082.62 20486.07 00:24:26.726 0 00:24:26.726 22:48:10 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:26.726 22:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:26.726 22:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:26.726 22:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:26.726 | .driver_specific 00:24:26.726 | .nvme_error 00:24:26.726 | .status_code 00:24:26.726 | .command_transient_transport_error' 00:24:26.984 22:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 110 > 0 )) 00:24:26.984 22:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1357293 00:24:26.984 22:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@942 -- # '[' -z 1357293 ']' 00:24:26.984 22:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # kill -0 1357293 00:24:26.984 22:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # uname 00:24:26.984 22:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:24:26.984 22:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1357293 00:24:26.984 22:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:24:26.984 22:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:24:26.984 22:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1357293' 00:24:26.984 killing process with pid 1357293 00:24:26.984 22:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@961 -- # kill 
1357293 00:24:26.984 Received shutdown signal, test time was about 2.000000 seconds 00:24:26.984 00:24:26.984 Latency(us) 00:24:26.984 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.984 =================================================================================================================== 00:24:26.984 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:26.984 22:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # wait 1357293 00:24:27.242 22:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1355680 00:24:27.242 22:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@942 -- # '[' -z 1355680 ']' 00:24:27.242 22:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # kill -0 1355680 00:24:27.242 22:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # uname 00:24:27.242 22:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:24:27.242 22:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1355680 00:24:27.242 22:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:24:27.242 22:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:24:27.242 22:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1355680' 00:24:27.242 killing process with pid 1355680 00:24:27.242 22:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@961 -- # kill 1355680 00:24:27.242 22:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # wait 1355680 00:24:27.500 00:24:27.500 real 0m16.598s 00:24:27.500 user 0m32.887s 00:24:27.500 sys 0m3.972s 00:24:27.500 22:48:10 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1118 -- # xtrace_disable 00:24:27.500 22:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:27.500 ************************************ 00:24:27.500 END TEST nvmf_digest_error 00:24:27.500 ************************************ 00:24:27.500 22:48:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1136 -- # return 0 00:24:27.500 22:48:10 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:24:27.500 22:48:10 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:24:27.500 22:48:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:27.500 22:48:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:24:27.500 22:48:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:27.500 22:48:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:24:27.500 22:48:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:27.500 22:48:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:27.759 rmmod nvme_tcp 00:24:27.759 rmmod nvme_fabrics 00:24:27.759 rmmod nvme_keyring 00:24:27.759 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:27.759 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:24:27.759 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:24:27.759 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1355680 ']' 00:24:27.759 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1355680 00:24:27.759 22:48:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@942 -- # '[' -z 1355680 ']' 00:24:27.759 22:48:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # kill -0 1355680 00:24:27.759 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 946: kill: (1355680) - No such process 00:24:27.759 22:48:11 
nvmf_tcp.nvmf_digest -- common/autotest_common.sh@969 -- # echo 'Process with pid 1355680 is not found' 00:24:27.759 Process with pid 1355680 is not found 00:24:27.759 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:27.759 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:27.759 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:27.759 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:27.759 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:27.759 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.759 22:48:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:27.759 22:48:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.663 22:48:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:29.663 00:24:29.663 real 0m37.299s 00:24:29.663 user 1m6.040s 00:24:29.663 sys 0m9.302s 00:24:29.663 22:48:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1118 -- # xtrace_disable 00:24:29.663 22:48:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:29.663 ************************************ 00:24:29.663 END TEST nvmf_digest 00:24:29.663 ************************************ 00:24:29.663 22:48:13 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:24:29.663 22:48:13 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:24:29.663 22:48:13 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:24:29.663 22:48:13 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:24:29.663 22:48:13 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:29.663 22:48:13 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 
00:24:29.663 22:48:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:24:29.663 22:48:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:29.663 ************************************ 00:24:29.663 START TEST nvmf_bdevperf 00:24:29.663 ************************************ 00:24:29.663 22:48:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:29.921 * Looking for test storage... 00:24:29.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:29.921 22:48:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:29.921 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:24:29.921 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:29.921 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:29.921 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:29.921 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:29.921 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:29.921 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:29.921 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:29.921 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:29.921 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:29.921 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:29.921 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:29.921 22:48:13 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:29.921 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 
00:24:29.922 22:48:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:31.822 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:31.822 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:31.822 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:31.822 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:31.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:31.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:24:31.822 00:24:31.822 --- 10.0.0.2 ping statistics --- 00:24:31.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.822 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:24:31.822 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:31.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:31.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:24:31.822 00:24:31.823 --- 10.0.0.1 ping statistics --- 00:24:31.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.823 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:24:31.823 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:31.823 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:24:31.823 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:31.823 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:31.823 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:31.823 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:31.823 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:31.823 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:31.823 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:31.823 22:48:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:24:31.823 22:48:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:31.823 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:31.823 22:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:31.823 22:48:15 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@10 -- # set +x 00:24:31.823 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1360152 00:24:31.823 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:31.823 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1360152 00:24:31.823 22:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@823 -- # '[' -z 1360152 ']' 00:24:31.823 22:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.823 22:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@828 -- # local max_retries=100 00:24:31.823 22:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.823 22:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # xtrace_disable 00:24:31.823 22:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:31.823 [2024-07-15 22:48:15.262705] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:24:31.823 [2024-07-15 22:48:15.262790] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.087 [2024-07-15 22:48:15.334332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:32.087 [2024-07-15 22:48:15.442468] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:32.087 [2024-07-15 22:48:15.442550] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:32.087 [2024-07-15 22:48:15.442563] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:32.087 [2024-07-15 22:48:15.442575] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:32.087 [2024-07-15 22:48:15.442588] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:32.087 [2024-07-15 22:48:15.442713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:32.087 [2024-07-15 22:48:15.442788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:32.087 [2024-07-15 22:48:15.442791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.087 22:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:24:32.087 22:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # return 0 00:24:32.087 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:32.087 22:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:32.087 22:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:32.087 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:32.087 22:48:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:32.087 22:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:32.087 22:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:32.347 [2024-07-15 22:48:15.589872] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.347 22:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:32.347 22:48:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:32.347 22:48:15 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:32.347 22:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:32.347 Malloc0 00:24:32.347 22:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:32.347 22:48:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:32.347 22:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:32.347 22:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:32.347 22:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:32.347 22:48:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:32.347 22:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:32.347 22:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:32.347 22:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:32.347 22:48:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:32.347 22:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:32.347 22:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:32.347 [2024-07-15 22:48:15.648690] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:32.347 22:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:32.347 22:48:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:24:32.347 22:48:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # 
gen_nvmf_target_json 00:24:32.347 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:32.347 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:32.347 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:32.347 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:32.347 { 00:24:32.347 "params": { 00:24:32.347 "name": "Nvme$subsystem", 00:24:32.347 "trtype": "$TEST_TRANSPORT", 00:24:32.347 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.347 "adrfam": "ipv4", 00:24:32.347 "trsvcid": "$NVMF_PORT", 00:24:32.347 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.347 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.347 "hdgst": ${hdgst:-false}, 00:24:32.347 "ddgst": ${ddgst:-false} 00:24:32.347 }, 00:24:32.347 "method": "bdev_nvme_attach_controller" 00:24:32.347 } 00:24:32.347 EOF 00:24:32.347 )") 00:24:32.347 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:32.347 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:32.347 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:32.347 22:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:32.347 "params": { 00:24:32.347 "name": "Nvme1", 00:24:32.347 "trtype": "tcp", 00:24:32.347 "traddr": "10.0.0.2", 00:24:32.347 "adrfam": "ipv4", 00:24:32.347 "trsvcid": "4420", 00:24:32.347 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:32.347 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:32.347 "hdgst": false, 00:24:32.347 "ddgst": false 00:24:32.347 }, 00:24:32.347 "method": "bdev_nvme_attach_controller" 00:24:32.347 }' 00:24:32.347 [2024-07-15 22:48:15.697743] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:24:32.347 [2024-07-15 22:48:15.697820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1360298 ] 00:24:32.347 [2024-07-15 22:48:15.756284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.605 [2024-07-15 22:48:15.868813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.861 Running I/O for 1 seconds... 00:24:33.795 00:24:33.795 Latency(us) 00:24:33.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.795 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:33.795 Verification LBA range: start 0x0 length 0x4000 00:24:33.795 Nvme1n1 : 1.01 8859.67 34.61 0.00 0.00 14390.86 3082.62 14369.37 00:24:33.795 =================================================================================================================== 00:24:33.795 Total : 8859.67 34.61 0.00 0.00 14390.86 3082.62 14369.37 00:24:34.053 22:48:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1360441 00:24:34.053 22:48:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:24:34.053 22:48:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:24:34.053 22:48:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:24:34.053 22:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:34.053 22:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:34.053 22:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:34.053 22:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:34.053 { 00:24:34.053 "params": { 00:24:34.053 "name": 
"Nvme$subsystem", 00:24:34.053 "trtype": "$TEST_TRANSPORT", 00:24:34.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:34.053 "adrfam": "ipv4", 00:24:34.053 "trsvcid": "$NVMF_PORT", 00:24:34.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:34.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:34.053 "hdgst": ${hdgst:-false}, 00:24:34.053 "ddgst": ${ddgst:-false} 00:24:34.053 }, 00:24:34.053 "method": "bdev_nvme_attach_controller" 00:24:34.053 } 00:24:34.053 EOF 00:24:34.053 )") 00:24:34.053 22:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:34.053 22:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:34.053 22:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:34.053 22:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:34.053 "params": { 00:24:34.053 "name": "Nvme1", 00:24:34.053 "trtype": "tcp", 00:24:34.053 "traddr": "10.0.0.2", 00:24:34.053 "adrfam": "ipv4", 00:24:34.053 "trsvcid": "4420", 00:24:34.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:34.053 "hdgst": false, 00:24:34.053 "ddgst": false 00:24:34.053 }, 00:24:34.053 "method": "bdev_nvme_attach_controller" 00:24:34.053 }' 00:24:34.053 [2024-07-15 22:48:17.454098] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:24:34.053 [2024-07-15 22:48:17.454198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1360441 ] 00:24:34.053 [2024-07-15 22:48:17.515475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.312 [2024-07-15 22:48:17.624783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.570 Running I/O for 15 seconds... 
00:24:37.105 22:48:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1360152 00:24:37.105 22:48:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:24:37.105 [2024-07-15 22:48:20.421334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:53448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.105 [2024-07-15 22:48:20.421383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.105 [2024-07-15 22:48:20.421416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:53456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.105 [2024-07-15 22:48:20.421434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.105 [2024-07-15 22:48:20.421454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:53464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.105 [2024-07-15 22:48:20.421470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.105 [2024-07-15 22:48:20.421488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:53472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.105 [2024-07-15 22:48:20.421503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.105 [2024-07-15 22:48:20.421520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:53480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.105 [2024-07-15 22:48:20.421535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.105 [2024-07-15 22:48:20.421551] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:53488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.105 [2024-07-15 22:48:20.421567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.105 [2024-07-15 22:48:20.421585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:53496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.105 [2024-07-15 22:48:20.421600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.105 [2024-07-15 22:48:20.421618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:53504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.105 [2024-07-15 22:48:20.421634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.105 [2024-07-15 22:48:20.421651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:53512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.105 [2024-07-15 22:48:20.421669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.105 [2024-07-15 22:48:20.421686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:53520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.105 [2024-07-15 22:48:20.421702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.105 [2024-07-15 22:48:20.421718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.105 [2024-07-15 22:48:20.421733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.105-00:24:37.108 [2024-07-15 22:48:20.421750-20.425532] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [repeated pair for every outstanding command on the deleted submission queue] READ sqid:1 nsid:1 len:8 (various cid; lba:53536-54432; SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE sqid:1 nsid:1 len:8 (cid:50/116/94/123; lba:54440-54464; SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed with: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.108 [2024-07-15 22:48:20.425548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26cb4c0 is same with the state(5) to be set
00:24:37.108 [2024-07-15 22:48:20.425567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:37.108 [2024-07-15 22:48:20.425580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:37.108 [2024-07-15 22:48:20.425593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54432 len:8 PRP1 0x0 PRP2 0x0
00:24:37.108 [2024-07-15 22:48:20.425607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.108 [2024-07-15 22:48:20.425677] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x26cb4c0 was disconnected and freed. reset controller.
00:24:37.108 [2024-07-15 22:48:20.429572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.108 [2024-07-15 22:48:20.429647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.108 [2024-07-15 22:48:20.430385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.108 [2024-07-15 22:48:20.430419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.108 [2024-07-15 22:48:20.430437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.108 [2024-07-15 22:48:20.430676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.108 [2024-07-15 22:48:20.430945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.108 [2024-07-15 22:48:20.430968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.108 [2024-07-15 22:48:20.430984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.108 [2024-07-15 22:48:20.434509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.108 [2024-07-15 22:48:20.443764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.108 [2024-07-15 22:48:20.444252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.108 [2024-07-15 22:48:20.444284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.108 [2024-07-15 22:48:20.444302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.108 [2024-07-15 22:48:20.444539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.108 [2024-07-15 22:48:20.444780] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.108 [2024-07-15 22:48:20.444803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.108 [2024-07-15 22:48:20.444818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.108 [2024-07-15 22:48:20.448379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.108 [2024-07-15 22:48:20.457597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.108 [2024-07-15 22:48:20.458082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.108 [2024-07-15 22:48:20.458109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.108 [2024-07-15 22:48:20.458124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.108 [2024-07-15 22:48:20.458336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.108 [2024-07-15 22:48:20.458594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.108 [2024-07-15 22:48:20.458617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.108 [2024-07-15 22:48:20.458632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.108 [2024-07-15 22:48:20.462211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.108 [2024-07-15 22:48:20.471641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.108 [2024-07-15 22:48:20.472108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.108 [2024-07-15 22:48:20.472139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.108 [2024-07-15 22:48:20.472157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.108 [2024-07-15 22:48:20.472393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.108 [2024-07-15 22:48:20.472633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.108 [2024-07-15 22:48:20.472656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.108 [2024-07-15 22:48:20.472671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.108 [2024-07-15 22:48:20.476231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.108 [2024-07-15 22:48:20.485466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.108 [2024-07-15 22:48:20.485931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.108 [2024-07-15 22:48:20.485963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.108 [2024-07-15 22:48:20.485980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.108 [2024-07-15 22:48:20.486217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.108 [2024-07-15 22:48:20.486458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.108 [2024-07-15 22:48:20.486483] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.108 [2024-07-15 22:48:20.486498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.108 [2024-07-15 22:48:20.490058] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.108 [2024-07-15 22:48:20.499458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.108 [2024-07-15 22:48:20.499915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.108 [2024-07-15 22:48:20.499947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.108 [2024-07-15 22:48:20.499965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.108 [2024-07-15 22:48:20.500206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.108 [2024-07-15 22:48:20.500447] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.108 [2024-07-15 22:48:20.500471] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.109 [2024-07-15 22:48:20.500486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.109 [2024-07-15 22:48:20.504049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.109 [2024-07-15 22:48:20.513477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.109 [2024-07-15 22:48:20.513914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.109 [2024-07-15 22:48:20.513945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.109 [2024-07-15 22:48:20.513963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.109 [2024-07-15 22:48:20.514199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.109 [2024-07-15 22:48:20.514440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.109 [2024-07-15 22:48:20.514463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.109 [2024-07-15 22:48:20.514478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.109 [2024-07-15 22:48:20.518042] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.109 [2024-07-15 22:48:20.527478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.109 [2024-07-15 22:48:20.527929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.109 [2024-07-15 22:48:20.527960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.109 [2024-07-15 22:48:20.527978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.109 [2024-07-15 22:48:20.528214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.109 [2024-07-15 22:48:20.528455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.109 [2024-07-15 22:48:20.528478] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.109 [2024-07-15 22:48:20.528493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.109 [2024-07-15 22:48:20.532052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.109 [2024-07-15 22:48:20.541488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.109 [2024-07-15 22:48:20.541945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.109 [2024-07-15 22:48:20.541977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.109 [2024-07-15 22:48:20.541994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.109 [2024-07-15 22:48:20.542231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.109 [2024-07-15 22:48:20.542472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.109 [2024-07-15 22:48:20.542495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.109 [2024-07-15 22:48:20.542515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.109 [2024-07-15 22:48:20.546074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.109 [2024-07-15 22:48:20.555501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.109 [2024-07-15 22:48:20.555944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.109 [2024-07-15 22:48:20.555975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.109 [2024-07-15 22:48:20.555992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.109 [2024-07-15 22:48:20.556229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.109 [2024-07-15 22:48:20.556470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.109 [2024-07-15 22:48:20.556493] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.109 [2024-07-15 22:48:20.556508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.109 [2024-07-15 22:48:20.560071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.109 [2024-07-15 22:48:20.569509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.109 [2024-07-15 22:48:20.569964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.109 [2024-07-15 22:48:20.569995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.109 [2024-07-15 22:48:20.570012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.109 [2024-07-15 22:48:20.570250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.109 [2024-07-15 22:48:20.570491] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.109 [2024-07-15 22:48:20.570514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.109 [2024-07-15 22:48:20.570529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.109 [2024-07-15 22:48:20.574086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.109 [2024-07-15 22:48:20.583515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.109 [2024-07-15 22:48:20.583971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.109 [2024-07-15 22:48:20.584002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.109 [2024-07-15 22:48:20.584019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.109 [2024-07-15 22:48:20.584256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.109 [2024-07-15 22:48:20.584497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.109 [2024-07-15 22:48:20.584520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.109 [2024-07-15 22:48:20.584535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.109 [2024-07-15 22:48:20.588094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.109 [2024-07-15 22:48:20.597524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.109 [2024-07-15 22:48:20.597973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.110 [2024-07-15 22:48:20.598010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.110 [2024-07-15 22:48:20.598028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.110 [2024-07-15 22:48:20.598264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.110 [2024-07-15 22:48:20.598505] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.110 [2024-07-15 22:48:20.598528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.110 [2024-07-15 22:48:20.598543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.110 [2024-07-15 22:48:20.602104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.368 [2024-07-15 22:48:20.611534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.368 [2024-07-15 22:48:20.611986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.368 [2024-07-15 22:48:20.612018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.368 [2024-07-15 22:48:20.612036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.368 [2024-07-15 22:48:20.612273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.368 [2024-07-15 22:48:20.612521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.368 [2024-07-15 22:48:20.612545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.368 [2024-07-15 22:48:20.612560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.368 [2024-07-15 22:48:20.616121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.368 [2024-07-15 22:48:20.625549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.368 [2024-07-15 22:48:20.625982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.368 [2024-07-15 22:48:20.626014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.368 [2024-07-15 22:48:20.626032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.368 [2024-07-15 22:48:20.626268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.368 [2024-07-15 22:48:20.626509] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.368 [2024-07-15 22:48:20.626532] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.368 [2024-07-15 22:48:20.626547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.368 [2024-07-15 22:48:20.630108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.368 [2024-07-15 22:48:20.639538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.368 [2024-07-15 22:48:20.639987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.368 [2024-07-15 22:48:20.640018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.368 [2024-07-15 22:48:20.640035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.368 [2024-07-15 22:48:20.640272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.368 [2024-07-15 22:48:20.640518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.368 [2024-07-15 22:48:20.640542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.368 [2024-07-15 22:48:20.640557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.369 [2024-07-15 22:48:20.644117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.369 [2024-07-15 22:48:20.653551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.369 [2024-07-15 22:48:20.654014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.369 [2024-07-15 22:48:20.654045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.369 [2024-07-15 22:48:20.654063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.369 [2024-07-15 22:48:20.654299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.369 [2024-07-15 22:48:20.654540] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.369 [2024-07-15 22:48:20.654563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.369 [2024-07-15 22:48:20.654578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.369 [2024-07-15 22:48:20.658137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.369 [2024-07-15 22:48:20.667361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.369 [2024-07-15 22:48:20.667814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.369 [2024-07-15 22:48:20.667845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.369 [2024-07-15 22:48:20.667862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.369 [2024-07-15 22:48:20.668107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.369 [2024-07-15 22:48:20.668348] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.369 [2024-07-15 22:48:20.668371] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.369 [2024-07-15 22:48:20.668386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.369 [2024-07-15 22:48:20.671946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.369 [2024-07-15 22:48:20.681186] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.369 [2024-07-15 22:48:20.681637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.369 [2024-07-15 22:48:20.681667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.369 [2024-07-15 22:48:20.681684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.369 [2024-07-15 22:48:20.681933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.369 [2024-07-15 22:48:20.682174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.369 [2024-07-15 22:48:20.682198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.369 [2024-07-15 22:48:20.682212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.369 [2024-07-15 22:48:20.685774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.369 [2024-07-15 22:48:20.695037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.369 [2024-07-15 22:48:20.695503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.369 [2024-07-15 22:48:20.695533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.369 [2024-07-15 22:48:20.695550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.369 [2024-07-15 22:48:20.695787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.369 [2024-07-15 22:48:20.696038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.369 [2024-07-15 22:48:20.696062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.369 [2024-07-15 22:48:20.696077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.369 [2024-07-15 22:48:20.699631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.631 [2024-07-15 22:48:21.083972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.631 [2024-07-15 22:48:21.084404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.631 [2024-07-15 22:48:21.084436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.631 [2024-07-15 22:48:21.084453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.631 [2024-07-15 22:48:21.084690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.631 [2024-07-15 22:48:21.084943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.631 [2024-07-15 22:48:21.084967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.631 [2024-07-15 22:48:21.084982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.631 [2024-07-15 22:48:21.088530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.631 [2024-07-15 22:48:21.097969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.631 [2024-07-15 22:48:21.098422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.631 [2024-07-15 22:48:21.098452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.631 [2024-07-15 22:48:21.098470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.631 [2024-07-15 22:48:21.098706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.631 [2024-07-15 22:48:21.098958] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.632 [2024-07-15 22:48:21.098982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.632 [2024-07-15 22:48:21.098998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.632 [2024-07-15 22:48:21.102548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.632 [2024-07-15 22:48:21.111989] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.632 [2024-07-15 22:48:21.112443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.632 [2024-07-15 22:48:21.112479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.632 [2024-07-15 22:48:21.112497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.632 [2024-07-15 22:48:21.112733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.632 [2024-07-15 22:48:21.112987] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.632 [2024-07-15 22:48:21.113011] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.632 [2024-07-15 22:48:21.113026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.632 [2024-07-15 22:48:21.116575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.632 [2024-07-15 22:48:21.125817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.632 [2024-07-15 22:48:21.126257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.632 [2024-07-15 22:48:21.126287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.632 [2024-07-15 22:48:21.126304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.632 [2024-07-15 22:48:21.126541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.632 [2024-07-15 22:48:21.126783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.632 [2024-07-15 22:48:21.126805] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.632 [2024-07-15 22:48:21.126820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.893 [2024-07-15 22:48:21.130393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.893 [2024-07-15 22:48:21.139703] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.893 [2024-07-15 22:48:21.140172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.893 [2024-07-15 22:48:21.140203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.893 [2024-07-15 22:48:21.140221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.893 [2024-07-15 22:48:21.140457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.893 [2024-07-15 22:48:21.140699] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.893 [2024-07-15 22:48:21.140722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.893 [2024-07-15 22:48:21.140737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.893 [2024-07-15 22:48:21.144303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.893 [2024-07-15 22:48:21.153547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.893 [2024-07-15 22:48:21.154000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.893 [2024-07-15 22:48:21.154032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.893 [2024-07-15 22:48:21.154049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.893 [2024-07-15 22:48:21.154286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.893 [2024-07-15 22:48:21.154533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.893 [2024-07-15 22:48:21.154557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.893 [2024-07-15 22:48:21.154572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.893 [2024-07-15 22:48:21.158141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.893 [2024-07-15 22:48:21.167383] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.893 [2024-07-15 22:48:21.167807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.893 [2024-07-15 22:48:21.167838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.893 [2024-07-15 22:48:21.167855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.893 [2024-07-15 22:48:21.168101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.893 [2024-07-15 22:48:21.168343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.893 [2024-07-15 22:48:21.168366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.893 [2024-07-15 22:48:21.168381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.893 [2024-07-15 22:48:21.171944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.893 [2024-07-15 22:48:21.181389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.893 [2024-07-15 22:48:21.181824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.893 [2024-07-15 22:48:21.181854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.893 [2024-07-15 22:48:21.181871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.893 [2024-07-15 22:48:21.182118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.893 [2024-07-15 22:48:21.182359] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.893 [2024-07-15 22:48:21.182382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.893 [2024-07-15 22:48:21.182397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.893 [2024-07-15 22:48:21.185964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.893 [2024-07-15 22:48:21.195413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.893 [2024-07-15 22:48:21.195920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.893 [2024-07-15 22:48:21.195951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.893 [2024-07-15 22:48:21.195968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.893 [2024-07-15 22:48:21.196204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.893 [2024-07-15 22:48:21.196446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.893 [2024-07-15 22:48:21.196469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.893 [2024-07-15 22:48:21.196484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.893 [2024-07-15 22:48:21.200055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.893 [2024-07-15 22:48:21.209329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.893 [2024-07-15 22:48:21.209788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.893 [2024-07-15 22:48:21.209819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.893 [2024-07-15 22:48:21.209836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.893 [2024-07-15 22:48:21.210083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.893 [2024-07-15 22:48:21.210326] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.893 [2024-07-15 22:48:21.210349] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.893 [2024-07-15 22:48:21.210364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.893 [2024-07-15 22:48:21.213947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.894 [2024-07-15 22:48:21.223190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.894 [2024-07-15 22:48:21.223620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.894 [2024-07-15 22:48:21.223651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.894 [2024-07-15 22:48:21.223668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.894 [2024-07-15 22:48:21.223915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.894 [2024-07-15 22:48:21.224156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.894 [2024-07-15 22:48:21.224179] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.894 [2024-07-15 22:48:21.224195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.894 [2024-07-15 22:48:21.227752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.894 [2024-07-15 22:48:21.237203] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.894 [2024-07-15 22:48:21.237653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.894 [2024-07-15 22:48:21.237684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.894 [2024-07-15 22:48:21.237701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.894 [2024-07-15 22:48:21.237947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.894 [2024-07-15 22:48:21.238188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.894 [2024-07-15 22:48:21.238211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.894 [2024-07-15 22:48:21.238226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.894 [2024-07-15 22:48:21.241778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.894 [2024-07-15 22:48:21.251025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.894 [2024-07-15 22:48:21.251475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.894 [2024-07-15 22:48:21.251506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.894 [2024-07-15 22:48:21.251529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.894 [2024-07-15 22:48:21.251766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.894 [2024-07-15 22:48:21.252017] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.894 [2024-07-15 22:48:21.252041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.894 [2024-07-15 22:48:21.252056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.894 [2024-07-15 22:48:21.255612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.894 [2024-07-15 22:48:21.264864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.894 [2024-07-15 22:48:21.265320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.894 [2024-07-15 22:48:21.265351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.894 [2024-07-15 22:48:21.265368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.894 [2024-07-15 22:48:21.265605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.894 [2024-07-15 22:48:21.265845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.894 [2024-07-15 22:48:21.265868] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.894 [2024-07-15 22:48:21.265894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.894 [2024-07-15 22:48:21.269451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.894 [2024-07-15 22:48:21.278891] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.894 [2024-07-15 22:48:21.279329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.894 [2024-07-15 22:48:21.279359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.894 [2024-07-15 22:48:21.279377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.894 [2024-07-15 22:48:21.279613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.894 [2024-07-15 22:48:21.279853] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.894 [2024-07-15 22:48:21.279885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.894 [2024-07-15 22:48:21.279903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.894 [2024-07-15 22:48:21.283455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.894 [2024-07-15 22:48:21.292910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.894 [2024-07-15 22:48:21.293363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.894 [2024-07-15 22:48:21.293394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.894 [2024-07-15 22:48:21.293412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.894 [2024-07-15 22:48:21.293649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.894 [2024-07-15 22:48:21.293902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.894 [2024-07-15 22:48:21.293931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.894 [2024-07-15 22:48:21.293948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.894 [2024-07-15 22:48:21.297499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.894 [2024-07-15 22:48:21.306743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.894 [2024-07-15 22:48:21.307207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.894 [2024-07-15 22:48:21.307239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.894 [2024-07-15 22:48:21.307256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.894 [2024-07-15 22:48:21.307492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.894 [2024-07-15 22:48:21.307733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.894 [2024-07-15 22:48:21.307767] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.894 [2024-07-15 22:48:21.307782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.894 [2024-07-15 22:48:21.311354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.894 [2024-07-15 22:48:21.320596] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.894 [2024-07-15 22:48:21.321062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.894 [2024-07-15 22:48:21.321093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.894 [2024-07-15 22:48:21.321110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.894 [2024-07-15 22:48:21.321347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.894 [2024-07-15 22:48:21.321588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.894 [2024-07-15 22:48:21.321611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.894 [2024-07-15 22:48:21.321626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.894 [2024-07-15 22:48:21.325192] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.894 [2024-07-15 22:48:21.334435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.894 [2024-07-15 22:48:21.334872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.894 [2024-07-15 22:48:21.334911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.894 [2024-07-15 22:48:21.334928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.894 [2024-07-15 22:48:21.335165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.894 [2024-07-15 22:48:21.335405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.894 [2024-07-15 22:48:21.335428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.894 [2024-07-15 22:48:21.335443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.894 [2024-07-15 22:48:21.339005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.894 [2024-07-15 22:48:21.348454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.894 [2024-07-15 22:48:21.348936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.894 [2024-07-15 22:48:21.349005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.894 [2024-07-15 22:48:21.349023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.894 [2024-07-15 22:48:21.349259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.894 [2024-07-15 22:48:21.349500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.894 [2024-07-15 22:48:21.349523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.894 [2024-07-15 22:48:21.349538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.894 [2024-07-15 22:48:21.353105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.894 [2024-07-15 22:48:21.362343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.894 [2024-07-15 22:48:21.362766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.894 [2024-07-15 22:48:21.362797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.894 [2024-07-15 22:48:21.362815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.894 [2024-07-15 22:48:21.363061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.894 [2024-07-15 22:48:21.363303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.894 [2024-07-15 22:48:21.363326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.894 [2024-07-15 22:48:21.363341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.895 [2024-07-15 22:48:21.366901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.895 [2024-07-15 22:48:21.376337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.895 [2024-07-15 22:48:21.376789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.895 [2024-07-15 22:48:21.376819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.895 [2024-07-15 22:48:21.376836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.895 [2024-07-15 22:48:21.377083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.895 [2024-07-15 22:48:21.377324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.895 [2024-07-15 22:48:21.377347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.895 [2024-07-15 22:48:21.377361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.895 [2024-07-15 22:48:21.380920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.895 [2024-07-15 22:48:21.390364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.895 [2024-07-15 22:48:21.390818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.895 [2024-07-15 22:48:21.390848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:37.895 [2024-07-15 22:48:21.390865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:37.895 [2024-07-15 22:48:21.391120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:37.895 [2024-07-15 22:48:21.391362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.895 [2024-07-15 22:48:21.391385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.895 [2024-07-15 22:48:21.391400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.157 [2024-07-15 22:48:21.394965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.157 [2024-07-15 22:48:21.404200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.157 [2024-07-15 22:48:21.404653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.157 [2024-07-15 22:48:21.404684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.157 [2024-07-15 22:48:21.404700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.157 [2024-07-15 22:48:21.404950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.157 [2024-07-15 22:48:21.405191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.157 [2024-07-15 22:48:21.405215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.157 [2024-07-15 22:48:21.405230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.157 [2024-07-15 22:48:21.408781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.157 [2024-07-15 22:48:21.418019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.157 [2024-07-15 22:48:21.418448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.157 [2024-07-15 22:48:21.418478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.157 [2024-07-15 22:48:21.418495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.157 [2024-07-15 22:48:21.418731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.157 [2024-07-15 22:48:21.418983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.157 [2024-07-15 22:48:21.419007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.157 [2024-07-15 22:48:21.419022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.157 [2024-07-15 22:48:21.422571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.157 [2024-07-15 22:48:21.432020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.157 [2024-07-15 22:48:21.432450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.157 [2024-07-15 22:48:21.432481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.157 [2024-07-15 22:48:21.432499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.157 [2024-07-15 22:48:21.432735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.157 [2024-07-15 22:48:21.432998] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.157 [2024-07-15 22:48:21.433023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.157 [2024-07-15 22:48:21.433043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.157 [2024-07-15 22:48:21.436603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.157 [2024-07-15 22:48:21.445850] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.157 [2024-07-15 22:48:21.446323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.157 [2024-07-15 22:48:21.446353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.157 [2024-07-15 22:48:21.446371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.157 [2024-07-15 22:48:21.446608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.157 [2024-07-15 22:48:21.446849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.157 [2024-07-15 22:48:21.446872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.157 [2024-07-15 22:48:21.446898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.157 [2024-07-15 22:48:21.450532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.157 [2024-07-15 22:48:21.459772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.157 [2024-07-15 22:48:21.460213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.157 [2024-07-15 22:48:21.460243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.157 [2024-07-15 22:48:21.460261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.157 [2024-07-15 22:48:21.460497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.157 [2024-07-15 22:48:21.460738] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.157 [2024-07-15 22:48:21.460761] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.157 [2024-07-15 22:48:21.460776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.157 [2024-07-15 22:48:21.464347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.157 [2024-07-15 22:48:21.473799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.157 [2024-07-15 22:48:21.474258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.157 [2024-07-15 22:48:21.474288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.157 [2024-07-15 22:48:21.474306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.157 [2024-07-15 22:48:21.474542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.157 [2024-07-15 22:48:21.474783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.157 [2024-07-15 22:48:21.474807] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.157 [2024-07-15 22:48:21.474821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.157 [2024-07-15 22:48:21.478388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.157 [2024-07-15 22:48:21.487614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.157 [2024-07-15 22:48:21.488046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.157 [2024-07-15 22:48:21.488083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.157 [2024-07-15 22:48:21.488101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.157 [2024-07-15 22:48:21.488338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.157 [2024-07-15 22:48:21.488579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.157 [2024-07-15 22:48:21.488602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.157 [2024-07-15 22:48:21.488617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.157 [2024-07-15 22:48:21.492181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.157 [2024-07-15 22:48:21.501629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.157 [2024-07-15 22:48:21.502091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.157 [2024-07-15 22:48:21.502122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.157 [2024-07-15 22:48:21.502139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.157 [2024-07-15 22:48:21.502376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.157 [2024-07-15 22:48:21.502617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.157 [2024-07-15 22:48:21.502640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.157 [2024-07-15 22:48:21.502655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.157 [2024-07-15 22:48:21.506214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.157 [2024-07-15 22:48:21.515647] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.157 [2024-07-15 22:48:21.516083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.157 [2024-07-15 22:48:21.516114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.157 [2024-07-15 22:48:21.516132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.157 [2024-07-15 22:48:21.516368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.157 [2024-07-15 22:48:21.516609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.157 [2024-07-15 22:48:21.516632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.157 [2024-07-15 22:48:21.516647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.157 [2024-07-15 22:48:21.520211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.157 [2024-07-15 22:48:21.529645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.157 [2024-07-15 22:48:21.530113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.158 [2024-07-15 22:48:21.530144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.158 [2024-07-15 22:48:21.530161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.158 [2024-07-15 22:48:21.530397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.158 [2024-07-15 22:48:21.530644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.158 [2024-07-15 22:48:21.530667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.158 [2024-07-15 22:48:21.530683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.158 [2024-07-15 22:48:21.534245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.158 [2024-07-15 22:48:21.543472] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.158 [2024-07-15 22:48:21.543987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.158 [2024-07-15 22:48:21.544019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.158 [2024-07-15 22:48:21.544036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.158 [2024-07-15 22:48:21.544273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.158 [2024-07-15 22:48:21.544514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.158 [2024-07-15 22:48:21.544537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.158 [2024-07-15 22:48:21.544552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.158 [2024-07-15 22:48:21.548118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.158 [2024-07-15 22:48:21.557347] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.158 [2024-07-15 22:48:21.557808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.158 [2024-07-15 22:48:21.557839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.158 [2024-07-15 22:48:21.557856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.158 [2024-07-15 22:48:21.558101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.158 [2024-07-15 22:48:21.558342] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.158 [2024-07-15 22:48:21.558365] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.158 [2024-07-15 22:48:21.558380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.158 [2024-07-15 22:48:21.561943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.158 [2024-07-15 22:48:21.571171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.158 [2024-07-15 22:48:21.571631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.158 [2024-07-15 22:48:21.571662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.158 [2024-07-15 22:48:21.571679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.158 [2024-07-15 22:48:21.571927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.158 [2024-07-15 22:48:21.572168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.158 [2024-07-15 22:48:21.572191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.158 [2024-07-15 22:48:21.572206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.158 [2024-07-15 22:48:21.575763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.158 [2024-07-15 22:48:21.584999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.158 [2024-07-15 22:48:21.585449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.158 [2024-07-15 22:48:21.585479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.158 [2024-07-15 22:48:21.585496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.158 [2024-07-15 22:48:21.585732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.158 [2024-07-15 22:48:21.585985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.158 [2024-07-15 22:48:21.586009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.158 [2024-07-15 22:48:21.586024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.158 [2024-07-15 22:48:21.589578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.158 [2024-07-15 22:48:21.598806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.158 [2024-07-15 22:48:21.599250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.158 [2024-07-15 22:48:21.599280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.158 [2024-07-15 22:48:21.599298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.158 [2024-07-15 22:48:21.599533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.158 [2024-07-15 22:48:21.599774] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.158 [2024-07-15 22:48:21.599797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.158 [2024-07-15 22:48:21.599812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.158 [2024-07-15 22:48:21.603376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.158 [2024-07-15 22:48:21.612815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.158 [2024-07-15 22:48:21.613273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.158 [2024-07-15 22:48:21.613304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.158 [2024-07-15 22:48:21.613322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.158 [2024-07-15 22:48:21.613558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.158 [2024-07-15 22:48:21.613798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.158 [2024-07-15 22:48:21.613821] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.158 [2024-07-15 22:48:21.613836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.158 [2024-07-15 22:48:21.617399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.158 [2024-07-15 22:48:21.626835] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.158 [2024-07-15 22:48:21.627297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.158 [2024-07-15 22:48:21.627327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.158 [2024-07-15 22:48:21.627350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.158 [2024-07-15 22:48:21.627587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.158 [2024-07-15 22:48:21.627828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.158 [2024-07-15 22:48:21.627851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.158 [2024-07-15 22:48:21.627866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.158 [2024-07-15 22:48:21.631428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.158 [2024-07-15 22:48:21.640664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.158 [2024-07-15 22:48:21.641098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.158 [2024-07-15 22:48:21.641129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.158 [2024-07-15 22:48:21.641146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.158 [2024-07-15 22:48:21.641382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.158 [2024-07-15 22:48:21.641623] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.158 [2024-07-15 22:48:21.641647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.158 [2024-07-15 22:48:21.641661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.158 [2024-07-15 22:48:21.645222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.158 [2024-07-15 22:48:21.654484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.158 [2024-07-15 22:48:21.654912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.158 [2024-07-15 22:48:21.654943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.158 [2024-07-15 22:48:21.654960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.158 [2024-07-15 22:48:21.655197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.158 [2024-07-15 22:48:21.655437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.158 [2024-07-15 22:48:21.655460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.158 [2024-07-15 22:48:21.655476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.420 [2024-07-15 22:48:21.659051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.420 [2024-07-15 22:48:21.668511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.420 [2024-07-15 22:48:21.668962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.420 [2024-07-15 22:48:21.668993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.420 [2024-07-15 22:48:21.669011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.420 [2024-07-15 22:48:21.669247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.420 [2024-07-15 22:48:21.669488] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.420 [2024-07-15 22:48:21.669517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.420 [2024-07-15 22:48:21.669533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.420 [2024-07-15 22:48:21.673104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.420 [2024-07-15 22:48:21.682347] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.420 [2024-07-15 22:48:21.682795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.420 [2024-07-15 22:48:21.682826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.420 [2024-07-15 22:48:21.682843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.420 [2024-07-15 22:48:21.683089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.420 [2024-07-15 22:48:21.683330] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.420 [2024-07-15 22:48:21.683353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.421 [2024-07-15 22:48:21.683368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.421 [2024-07-15 22:48:21.686932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.421 [2024-07-15 22:48:21.696178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.421 [2024-07-15 22:48:21.696749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.421 [2024-07-15 22:48:21.696812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.421 [2024-07-15 22:48:21.696830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.421 [2024-07-15 22:48:21.697075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.421 [2024-07-15 22:48:21.697317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.421 [2024-07-15 22:48:21.697340] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.421 [2024-07-15 22:48:21.697355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.421 [2024-07-15 22:48:21.700920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.421 [2024-07-15 22:48:21.710173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.421 [2024-07-15 22:48:21.710661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.421 [2024-07-15 22:48:21.710692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.421 [2024-07-15 22:48:21.710709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.421 [2024-07-15 22:48:21.710954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.421 [2024-07-15 22:48:21.711197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.421 [2024-07-15 22:48:21.711220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.421 [2024-07-15 22:48:21.711234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.421 [2024-07-15 22:48:21.714785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.421 [2024-07-15 22:48:21.724052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.421 [2024-07-15 22:48:21.724569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.421 [2024-07-15 22:48:21.724623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.421 [2024-07-15 22:48:21.724641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.421 [2024-07-15 22:48:21.724887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.421 [2024-07-15 22:48:21.725128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.421 [2024-07-15 22:48:21.725151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.421 [2024-07-15 22:48:21.725167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.421 [2024-07-15 22:48:21.728732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.421 [2024-07-15 22:48:21.737978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.421 [2024-07-15 22:48:21.738463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.421 [2024-07-15 22:48:21.738509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.421 [2024-07-15 22:48:21.738526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.421 [2024-07-15 22:48:21.738762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.421 [2024-07-15 22:48:21.739016] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.421 [2024-07-15 22:48:21.739041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.421 [2024-07-15 22:48:21.739055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.421 [2024-07-15 22:48:21.742607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.421 [2024-07-15 22:48:21.751836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.421 [2024-07-15 22:48:21.752277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.421 [2024-07-15 22:48:21.752308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.421 [2024-07-15 22:48:21.752325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.421 [2024-07-15 22:48:21.752561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.421 [2024-07-15 22:48:21.752802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.421 [2024-07-15 22:48:21.752825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.421 [2024-07-15 22:48:21.752840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.421 [2024-07-15 22:48:21.756402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.421 [2024-07-15 22:48:21.765848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.421 [2024-07-15 22:48:21.766282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.421 [2024-07-15 22:48:21.766313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.421 [2024-07-15 22:48:21.766336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.421 [2024-07-15 22:48:21.766573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.421 [2024-07-15 22:48:21.766814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.421 [2024-07-15 22:48:21.766837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.421 [2024-07-15 22:48:21.766852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.421 [2024-07-15 22:48:21.770413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.421 [2024-07-15 22:48:21.779842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.421 [2024-07-15 22:48:21.780302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.421 [2024-07-15 22:48:21.780333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.421 [2024-07-15 22:48:21.780350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.421 [2024-07-15 22:48:21.780586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.421 [2024-07-15 22:48:21.780826] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.421 [2024-07-15 22:48:21.780849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.421 [2024-07-15 22:48:21.780865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.421 [2024-07-15 22:48:21.784423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.421 [2024-07-15 22:48:21.793855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.421 [2024-07-15 22:48:21.794316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.421 [2024-07-15 22:48:21.794347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.421 [2024-07-15 22:48:21.794364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.421 [2024-07-15 22:48:21.794601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.421 [2024-07-15 22:48:21.794841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.421 [2024-07-15 22:48:21.794864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.421 [2024-07-15 22:48:21.794890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.421 [2024-07-15 22:48:21.798443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.421 [2024-07-15 22:48:21.807869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.421 [2024-07-15 22:48:21.808352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.421 [2024-07-15 22:48:21.808382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.421 [2024-07-15 22:48:21.808400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.421 [2024-07-15 22:48:21.808637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.421 [2024-07-15 22:48:21.808887] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.421 [2024-07-15 22:48:21.808916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.421 [2024-07-15 22:48:21.808932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.421 [2024-07-15 22:48:21.812485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.421 [2024-07-15 22:48:21.821709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.421 [2024-07-15 22:48:21.822132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.421 [2024-07-15 22:48:21.822163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.421 [2024-07-15 22:48:21.822180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.421 [2024-07-15 22:48:21.822416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.421 [2024-07-15 22:48:21.822657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.421 [2024-07-15 22:48:21.822680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.421 [2024-07-15 22:48:21.822695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.421 [2024-07-15 22:48:21.826254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.421 [2024-07-15 22:48:21.835683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.421 [2024-07-15 22:48:21.836120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.421 [2024-07-15 22:48:21.836151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.421 [2024-07-15 22:48:21.836168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.422 [2024-07-15 22:48:21.836404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.422 [2024-07-15 22:48:21.836644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.422 [2024-07-15 22:48:21.836667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.422 [2024-07-15 22:48:21.836682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.422 [2024-07-15 22:48:21.840242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.422 [2024-07-15 22:48:21.849701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.422 [2024-07-15 22:48:21.850138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.422 [2024-07-15 22:48:21.850169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.422 [2024-07-15 22:48:21.850186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.422 [2024-07-15 22:48:21.850422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.422 [2024-07-15 22:48:21.850665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.422 [2024-07-15 22:48:21.850689] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.422 [2024-07-15 22:48:21.850704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.422 [2024-07-15 22:48:21.854265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.422 [2024-07-15 22:48:21.863700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.422 [2024-07-15 22:48:21.864144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.422 [2024-07-15 22:48:21.864175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.422 [2024-07-15 22:48:21.864192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.422 [2024-07-15 22:48:21.864428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.422 [2024-07-15 22:48:21.864669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.422 [2024-07-15 22:48:21.864692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.422 [2024-07-15 22:48:21.864707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.422 [2024-07-15 22:48:21.868263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.422 [2024-07-15 22:48:21.877692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.422 [2024-07-15 22:48:21.878152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.422 [2024-07-15 22:48:21.878183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.422 [2024-07-15 22:48:21.878200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.422 [2024-07-15 22:48:21.878436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.422 [2024-07-15 22:48:21.878678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.422 [2024-07-15 22:48:21.878701] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.422 [2024-07-15 22:48:21.878715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.422 [2024-07-15 22:48:21.882272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.422 [2024-07-15 22:48:21.891696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.422 [2024-07-15 22:48:21.892148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.422 [2024-07-15 22:48:21.892178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.422 [2024-07-15 22:48:21.892195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.422 [2024-07-15 22:48:21.892431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.422 [2024-07-15 22:48:21.892671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.422 [2024-07-15 22:48:21.892695] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.422 [2024-07-15 22:48:21.892709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.422 [2024-07-15 22:48:21.896267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.422 [2024-07-15 22:48:21.905695] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.422 [2024-07-15 22:48:21.906136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.422 [2024-07-15 22:48:21.906167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.422 [2024-07-15 22:48:21.906184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.422 [2024-07-15 22:48:21.906425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.422 [2024-07-15 22:48:21.906667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.422 [2024-07-15 22:48:21.906690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.422 [2024-07-15 22:48:21.906705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.422 [2024-07-15 22:48:21.910264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.683 [2024-07-15 22:48:21.919692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.683 [2024-07-15 22:48:21.920109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.683 [2024-07-15 22:48:21.920139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.683 [2024-07-15 22:48:21.920156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.683 [2024-07-15 22:48:21.920392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.683 [2024-07-15 22:48:21.920632] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.683 [2024-07-15 22:48:21.920655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.683 [2024-07-15 22:48:21.920670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.683 [2024-07-15 22:48:21.924230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.683 [2024-07-15 22:48:21.933663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.683 [2024-07-15 22:48:21.934095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.683 [2024-07-15 22:48:21.934126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.683 [2024-07-15 22:48:21.934143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.683 [2024-07-15 22:48:21.934380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.683 [2024-07-15 22:48:21.934620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.683 [2024-07-15 22:48:21.934643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.683 [2024-07-15 22:48:21.934657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.683 [2024-07-15 22:48:21.938217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.683 [2024-07-15 22:48:21.947652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.683 [2024-07-15 22:48:21.948082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.683 [2024-07-15 22:48:21.948113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.683 [2024-07-15 22:48:21.948131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.683 [2024-07-15 22:48:21.948367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.683 [2024-07-15 22:48:21.948608] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.683 [2024-07-15 22:48:21.948631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.683 [2024-07-15 22:48:21.948652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.683 [2024-07-15 22:48:21.952210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.683 [2024-07-15 22:48:21.961644] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.683 [2024-07-15 22:48:21.962088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.683 [2024-07-15 22:48:21.962120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.683 [2024-07-15 22:48:21.962137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.683 [2024-07-15 22:48:21.962373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.683 [2024-07-15 22:48:21.962614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.683 [2024-07-15 22:48:21.962637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.683 [2024-07-15 22:48:21.962652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.683 [2024-07-15 22:48:21.966217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.683 [2024-07-15 22:48:21.975646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.683 [2024-07-15 22:48:21.976086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.683 [2024-07-15 22:48:21.976117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.683 [2024-07-15 22:48:21.976134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.683 [2024-07-15 22:48:21.976370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.683 [2024-07-15 22:48:21.976611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.683 [2024-07-15 22:48:21.976634] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.683 [2024-07-15 22:48:21.976649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.683 [2024-07-15 22:48:21.980205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.683 [2024-07-15 22:48:21.989636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.683 [2024-07-15 22:48:21.990092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.683 [2024-07-15 22:48:21.990123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.683 [2024-07-15 22:48:21.990141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.683 [2024-07-15 22:48:21.990376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.683 [2024-07-15 22:48:21.990617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.683 [2024-07-15 22:48:21.990640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.683 [2024-07-15 22:48:21.990654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.684 [2024-07-15 22:48:21.994213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.684 [2024-07-15 22:48:22.003642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.684 [2024-07-15 22:48:22.004082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.684 [2024-07-15 22:48:22.004118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.684 [2024-07-15 22:48:22.004137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.684 [2024-07-15 22:48:22.004372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.684 [2024-07-15 22:48:22.004614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.684 [2024-07-15 22:48:22.004637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.684 [2024-07-15 22:48:22.004652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.684 [2024-07-15 22:48:22.008211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.684 [2024-07-15 22:48:22.017662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.684 [2024-07-15 22:48:22.018145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.684 [2024-07-15 22:48:22.018175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.684 [2024-07-15 22:48:22.018193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.684 [2024-07-15 22:48:22.018429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.684 [2024-07-15 22:48:22.018670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.684 [2024-07-15 22:48:22.018693] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.684 [2024-07-15 22:48:22.018708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.684 [2024-07-15 22:48:22.022268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.684 [2024-07-15 22:48:22.031497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.684 [2024-07-15 22:48:22.031954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.684 [2024-07-15 22:48:22.031986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.684 [2024-07-15 22:48:22.032003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.684 [2024-07-15 22:48:22.032240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.684 [2024-07-15 22:48:22.032481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.684 [2024-07-15 22:48:22.032505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.684 [2024-07-15 22:48:22.032520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.684 [2024-07-15 22:48:22.036129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.684 [2024-07-15 22:48:22.045398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.684 [2024-07-15 22:48:22.045805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.684 [2024-07-15 22:48:22.045836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.684 [2024-07-15 22:48:22.045853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.684 [2024-07-15 22:48:22.046098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.684 [2024-07-15 22:48:22.046345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.684 [2024-07-15 22:48:22.046369] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.684 [2024-07-15 22:48:22.046384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.684 [2024-07-15 22:48:22.049952] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.684 [2024-07-15 22:48:22.059398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.684 [2024-07-15 22:48:22.059837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.684 [2024-07-15 22:48:22.059868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:38.684 [2024-07-15 22:48:22.059895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:38.684 [2024-07-15 22:48:22.060132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:38.684 [2024-07-15 22:48:22.060374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.684 [2024-07-15 22:48:22.060397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.684 [2024-07-15 22:48:22.060411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.684 [2024-07-15 22:48:22.063977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.684 [2024-07-15 22:48:22.073419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.684 [2024-07-15 22:48:22.073870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.684 [2024-07-15 22:48:22.073907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:38.684 [2024-07-15 22:48:22.073925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:38.684 [2024-07-15 22:48:22.074161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:38.684 [2024-07-15 22:48:22.074402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.684 [2024-07-15 22:48:22.074426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.684 [2024-07-15 22:48:22.074441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.684 [2024-07-15 22:48:22.078005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.684 [2024-07-15 22:48:22.087242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.684 [2024-07-15 22:48:22.087685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.684 [2024-07-15 22:48:22.087715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:38.684 [2024-07-15 22:48:22.087732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:38.684 [2024-07-15 22:48:22.087979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:38.684 [2024-07-15 22:48:22.088227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.684 [2024-07-15 22:48:22.088251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.684 [2024-07-15 22:48:22.088266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.684 [2024-07-15 22:48:22.091824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.684 [2024-07-15 22:48:22.101232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.684 [2024-07-15 22:48:22.101698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.684 [2024-07-15 22:48:22.101729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:38.684 [2024-07-15 22:48:22.101747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:38.684 [2024-07-15 22:48:22.101993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:38.684 [2024-07-15 22:48:22.102236] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.684 [2024-07-15 22:48:22.102259] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.684 [2024-07-15 22:48:22.102274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.684 [2024-07-15 22:48:22.105823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.684 [2024-07-15 22:48:22.115072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.684 [2024-07-15 22:48:22.115535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.684 [2024-07-15 22:48:22.115565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:38.684 [2024-07-15 22:48:22.115583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:38.684 [2024-07-15 22:48:22.115819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:38.684 [2024-07-15 22:48:22.116069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.684 [2024-07-15 22:48:22.116093] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.684 [2024-07-15 22:48:22.116108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.684 [2024-07-15 22:48:22.119660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.684 [2024-07-15 22:48:22.128912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.684 [2024-07-15 22:48:22.129367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.684 [2024-07-15 22:48:22.129398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:38.684 [2024-07-15 22:48:22.129415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:38.684 [2024-07-15 22:48:22.129651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:38.684 [2024-07-15 22:48:22.129901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.684 [2024-07-15 22:48:22.129931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.684 [2024-07-15 22:48:22.129945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.684 [2024-07-15 22:48:22.133496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.684 [2024-07-15 22:48:22.142736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.684 [2024-07-15 22:48:22.143234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.684 [2024-07-15 22:48:22.143266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:38.684 [2024-07-15 22:48:22.143288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:38.684 [2024-07-15 22:48:22.143526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:38.684 [2024-07-15 22:48:22.143767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.684 [2024-07-15 22:48:22.143790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.685 [2024-07-15 22:48:22.143805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.685 [2024-07-15 22:48:22.147367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.685 [2024-07-15 22:48:22.156603] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.685 [2024-07-15 22:48:22.157079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.685 [2024-07-15 22:48:22.157110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:38.685 [2024-07-15 22:48:22.157128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:38.685 [2024-07-15 22:48:22.157375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:38.685 [2024-07-15 22:48:22.157615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.685 [2024-07-15 22:48:22.157638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.685 [2024-07-15 22:48:22.157653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.685 [2024-07-15 22:48:22.161212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.685 [2024-07-15 22:48:22.170447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.685 [2024-07-15 22:48:22.170897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.685 [2024-07-15 22:48:22.170928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:38.685 [2024-07-15 22:48:22.170946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:38.685 [2024-07-15 22:48:22.171183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:38.685 [2024-07-15 22:48:22.171423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.685 [2024-07-15 22:48:22.171446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.685 [2024-07-15 22:48:22.171461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.685 [2024-07-15 22:48:22.175023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.946 [2024-07-15 22:48:22.184471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.946 [2024-07-15 22:48:22.184940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.946 [2024-07-15 22:48:22.184971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:38.946 [2024-07-15 22:48:22.184988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:38.946 [2024-07-15 22:48:22.185225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:38.946 [2024-07-15 22:48:22.185466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.946 [2024-07-15 22:48:22.185495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.946 [2024-07-15 22:48:22.185510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.946 [2024-07-15 22:48:22.189071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.946 [2024-07-15 22:48:22.198308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.946 [2024-07-15 22:48:22.198738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.946 [2024-07-15 22:48:22.198768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:38.946 [2024-07-15 22:48:22.198785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:38.946 [2024-07-15 22:48:22.199030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:38.946 [2024-07-15 22:48:22.199272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.946 [2024-07-15 22:48:22.199295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.946 [2024-07-15 22:48:22.199310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.946 [2024-07-15 22:48:22.202859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.946 [2024-07-15 22:48:22.212298] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.946 [2024-07-15 22:48:22.212725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.946 [2024-07-15 22:48:22.212755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:38.946 [2024-07-15 22:48:22.212773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:38.946 [2024-07-15 22:48:22.213018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:38.946 [2024-07-15 22:48:22.213259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.946 [2024-07-15 22:48:22.213283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.946 [2024-07-15 22:48:22.213297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.946 [2024-07-15 22:48:22.216844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.946 [2024-07-15 22:48:22.226317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.946 [2024-07-15 22:48:22.226763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.946 [2024-07-15 22:48:22.226794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:38.946 [2024-07-15 22:48:22.226811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:38.946 [2024-07-15 22:48:22.227060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:38.946 [2024-07-15 22:48:22.227302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.947 [2024-07-15 22:48:22.227325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.947 [2024-07-15 22:48:22.227340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.947 [2024-07-15 22:48:22.230903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.947 [2024-07-15 22:48:22.240141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.947 [2024-07-15 22:48:22.240574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.947 [2024-07-15 22:48:22.240605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:38.947 [2024-07-15 22:48:22.240622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:38.947 [2024-07-15 22:48:22.240859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:38.947 [2024-07-15 22:48:22.241108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.947 [2024-07-15 22:48:22.241131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.947 [2024-07-15 22:48:22.241147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.947 [2024-07-15 22:48:22.244696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.947 [2024-07-15 22:48:22.254139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.947 [2024-07-15 22:48:22.254563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.947 [2024-07-15 22:48:22.254593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:38.947 [2024-07-15 22:48:22.254610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:38.947 [2024-07-15 22:48:22.254847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:38.947 [2024-07-15 22:48:22.255096] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.947 [2024-07-15 22:48:22.255120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.947 [2024-07-15 22:48:22.255135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.947 [2024-07-15 22:48:22.258684] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.947 [2024-07-15 22:48:22.268125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.947 [2024-07-15 22:48:22.268577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.947 [2024-07-15 22:48:22.268608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:38.947 [2024-07-15 22:48:22.268625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:38.947 [2024-07-15 22:48:22.268861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:38.947 [2024-07-15 22:48:22.269111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.947 [2024-07-15 22:48:22.269135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.947 [2024-07-15 22:48:22.269150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.947 [2024-07-15 22:48:22.272701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.947 [2024-07-15 22:48:22.281946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.947 [2024-07-15 22:48:22.282370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.947 [2024-07-15 22:48:22.282401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:38.947 [2024-07-15 22:48:22.282418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:38.947 [2024-07-15 22:48:22.282660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:38.947 [2024-07-15 22:48:22.282910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.947 [2024-07-15 22:48:22.282934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.947 [2024-07-15 22:48:22.282950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.947 [2024-07-15 22:48:22.286504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.947 [2024-07-15 22:48:22.295767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.947 [2024-07-15 22:48:22.296232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.947 [2024-07-15 22:48:22.296262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:38.947 [2024-07-15 22:48:22.296280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:38.947 [2024-07-15 22:48:22.296516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:38.947 [2024-07-15 22:48:22.296757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.947 [2024-07-15 22:48:22.296781] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.947 [2024-07-15 22:48:22.296796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.947 [2024-07-15 22:48:22.300356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.947 [2024-07-15 22:48:22.309611] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.947 [2024-07-15 22:48:22.310047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.947 [2024-07-15 22:48:22.310079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:38.947 [2024-07-15 22:48:22.310096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:38.947 [2024-07-15 22:48:22.310333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:38.947 [2024-07-15 22:48:22.310575] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.947 [2024-07-15 22:48:22.310599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.947 [2024-07-15 22:48:22.310614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.947 [2024-07-15 22:48:22.314178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.947 [2024-07-15 22:48:22.323626] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.947 [2024-07-15 22:48:22.324067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.947 [2024-07-15 22:48:22.324099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:38.947 [2024-07-15 22:48:22.324116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:38.947 [2024-07-15 22:48:22.324353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:38.947 [2024-07-15 22:48:22.324594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.947 [2024-07-15 22:48:22.324617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.947 [2024-07-15 22:48:22.324638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.947 [2024-07-15 22:48:22.328200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.947 [2024-07-15 22:48:22.337635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.947 [2024-07-15 22:48:22.338091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.947 [2024-07-15 22:48:22.338122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:38.947 [2024-07-15 22:48:22.338140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:38.947 [2024-07-15 22:48:22.338376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:38.947 [2024-07-15 22:48:22.338616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.947 [2024-07-15 22:48:22.338640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.947 [2024-07-15 22:48:22.338655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.947 [2024-07-15 22:48:22.342213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.947 [2024-07-15 22:48:22.351644] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.947 [2024-07-15 22:48:22.352083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.947 [2024-07-15 22:48:22.352113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:38.947 [2024-07-15 22:48:22.352130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:38.947 [2024-07-15 22:48:22.352366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:38.947 [2024-07-15 22:48:22.352607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.947 [2024-07-15 22:48:22.352629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.947 [2024-07-15 22:48:22.352644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.947 [2024-07-15 22:48:22.356231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.947 [2024-07-15 22:48:22.365467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.947 [2024-07-15 22:48:22.365927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.947 [2024-07-15 22:48:22.365959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:38.947 [2024-07-15 22:48:22.365976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:38.947 [2024-07-15 22:48:22.366212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:38.947 [2024-07-15 22:48:22.366454] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.947 [2024-07-15 22:48:22.366477] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.947 [2024-07-15 22:48:22.366492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.947 [2024-07-15 22:48:22.370055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.947 [2024-07-15 22:48:22.379302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.947 [2024-07-15 22:48:22.379735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.947 [2024-07-15 22:48:22.379765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:38.947 [2024-07-15 22:48:22.379783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:38.947 [2024-07-15 22:48:22.380028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:38.947 [2024-07-15 22:48:22.380270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.948 [2024-07-15 22:48:22.380293] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.948 [2024-07-15 22:48:22.380308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.948 [2024-07-15 22:48:22.383863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.948 [2024-07-15 22:48:22.393306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.948 [2024-07-15 22:48:22.393755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.948 [2024-07-15 22:48:22.393785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.948 [2024-07-15 22:48:22.393803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.948 [2024-07-15 22:48:22.394047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.948 [2024-07-15 22:48:22.394289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.948 [2024-07-15 22:48:22.394313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.948 [2024-07-15 22:48:22.394327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.948 [2024-07-15 22:48:22.397884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.948 [2024-07-15 22:48:22.407323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.948 [2024-07-15 22:48:22.407777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.948 [2024-07-15 22:48:22.407807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.948 [2024-07-15 22:48:22.407825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.948 [2024-07-15 22:48:22.408069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.948 [2024-07-15 22:48:22.408311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.948 [2024-07-15 22:48:22.408334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.948 [2024-07-15 22:48:22.408349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.948 [2024-07-15 22:48:22.411907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.948 [2024-07-15 22:48:22.421152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.948 [2024-07-15 22:48:22.421581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.948 [2024-07-15 22:48:22.421612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.948 [2024-07-15 22:48:22.421629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.948 [2024-07-15 22:48:22.421874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.948 [2024-07-15 22:48:22.422126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.948 [2024-07-15 22:48:22.422149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.948 [2024-07-15 22:48:22.422164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.948 [2024-07-15 22:48:22.425714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.948 [2024-07-15 22:48:22.434971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.948 [2024-07-15 22:48:22.435401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.948 [2024-07-15 22:48:22.435432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:38.948 [2024-07-15 22:48:22.435449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:38.948 [2024-07-15 22:48:22.435686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:38.948 [2024-07-15 22:48:22.435936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.948 [2024-07-15 22:48:22.435960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.948 [2024-07-15 22:48:22.435975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.948 [2024-07-15 22:48:22.439526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.209 [2024-07-15 22:48:22.448967] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.209 [2024-07-15 22:48:22.449416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.209 [2024-07-15 22:48:22.449446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.209 [2024-07-15 22:48:22.449463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.209 [2024-07-15 22:48:22.449700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.209 [2024-07-15 22:48:22.449952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.209 [2024-07-15 22:48:22.449976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.209 [2024-07-15 22:48:22.449992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.209 [2024-07-15 22:48:22.453542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.209 [2024-07-15 22:48:22.462991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.209 [2024-07-15 22:48:22.463448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.209 [2024-07-15 22:48:22.463478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.209 [2024-07-15 22:48:22.463495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.209 [2024-07-15 22:48:22.463730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.209 [2024-07-15 22:48:22.463981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.209 [2024-07-15 22:48:22.464005] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.209 [2024-07-15 22:48:22.464025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.209 [2024-07-15 22:48:22.467584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.209 [2024-07-15 22:48:22.476844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.209 [2024-07-15 22:48:22.477309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.209 [2024-07-15 22:48:22.477340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.209 [2024-07-15 22:48:22.477357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.209 [2024-07-15 22:48:22.477594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.209 [2024-07-15 22:48:22.477835] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.209 [2024-07-15 22:48:22.477857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.209 [2024-07-15 22:48:22.477872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.209 [2024-07-15 22:48:22.481443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.209 [2024-07-15 22:48:22.490668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.209 [2024-07-15 22:48:22.491139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.209 [2024-07-15 22:48:22.491169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.209 [2024-07-15 22:48:22.491187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.209 [2024-07-15 22:48:22.491422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.209 [2024-07-15 22:48:22.491663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.209 [2024-07-15 22:48:22.491686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.209 [2024-07-15 22:48:22.491701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.209 [2024-07-15 22:48:22.495262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.209 [2024-07-15 22:48:22.504488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.209 [2024-07-15 22:48:22.504938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.209 [2024-07-15 22:48:22.504970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.209 [2024-07-15 22:48:22.504987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.209 [2024-07-15 22:48:22.505223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.209 [2024-07-15 22:48:22.505464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.209 [2024-07-15 22:48:22.505487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.209 [2024-07-15 22:48:22.505502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.209 [2024-07-15 22:48:22.509063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.209 [2024-07-15 22:48:22.518492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.209 [2024-07-15 22:48:22.518917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.209 [2024-07-15 22:48:22.518955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.210 [2024-07-15 22:48:22.518973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.210 [2024-07-15 22:48:22.519210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.210 [2024-07-15 22:48:22.519451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.210 [2024-07-15 22:48:22.519474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.210 [2024-07-15 22:48:22.519489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.210 [2024-07-15 22:48:22.523052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.210 [2024-07-15 22:48:22.532481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.210 [2024-07-15 22:48:22.532927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.210 [2024-07-15 22:48:22.532959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.210 [2024-07-15 22:48:22.532977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.210 [2024-07-15 22:48:22.533214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.210 [2024-07-15 22:48:22.533455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.210 [2024-07-15 22:48:22.533478] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.210 [2024-07-15 22:48:22.533493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.210 [2024-07-15 22:48:22.537057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.210 [2024-07-15 22:48:22.546489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.210 [2024-07-15 22:48:22.546917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.210 [2024-07-15 22:48:22.546948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.210 [2024-07-15 22:48:22.546965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.210 [2024-07-15 22:48:22.547202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.210 [2024-07-15 22:48:22.547443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.210 [2024-07-15 22:48:22.547466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.210 [2024-07-15 22:48:22.547481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.210 [2024-07-15 22:48:22.551041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.210 [2024-07-15 22:48:22.560467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.210 [2024-07-15 22:48:22.560918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.210 [2024-07-15 22:48:22.560949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.210 [2024-07-15 22:48:22.560966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.210 [2024-07-15 22:48:22.561202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.210 [2024-07-15 22:48:22.561449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.210 [2024-07-15 22:48:22.561473] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.210 [2024-07-15 22:48:22.561488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.210 [2024-07-15 22:48:22.565051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.210 [2024-07-15 22:48:22.574478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.210 [2024-07-15 22:48:22.574913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.210 [2024-07-15 22:48:22.574946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.210 [2024-07-15 22:48:22.574963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.210 [2024-07-15 22:48:22.575201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.210 [2024-07-15 22:48:22.575441] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.210 [2024-07-15 22:48:22.575464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.210 [2024-07-15 22:48:22.575479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.210 [2024-07-15 22:48:22.579039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.210 [2024-07-15 22:48:22.588473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.210 [2024-07-15 22:48:22.588901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.210 [2024-07-15 22:48:22.588932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.210 [2024-07-15 22:48:22.588950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.210 [2024-07-15 22:48:22.589186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.210 [2024-07-15 22:48:22.589427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.210 [2024-07-15 22:48:22.589450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.210 [2024-07-15 22:48:22.589464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.210 [2024-07-15 22:48:22.593026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.210 [2024-07-15 22:48:22.602454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.210 [2024-07-15 22:48:22.602889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.210 [2024-07-15 22:48:22.602920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.210 [2024-07-15 22:48:22.602937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.210 [2024-07-15 22:48:22.603174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.210 [2024-07-15 22:48:22.603414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.210 [2024-07-15 22:48:22.603438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.210 [2024-07-15 22:48:22.603452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.210 [2024-07-15 22:48:22.607016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.210 [2024-07-15 22:48:22.616448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.210 [2024-07-15 22:48:22.616906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.210 [2024-07-15 22:48:22.616937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.210 [2024-07-15 22:48:22.616954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.210 [2024-07-15 22:48:22.617191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.210 [2024-07-15 22:48:22.617432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.210 [2024-07-15 22:48:22.617455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.210 [2024-07-15 22:48:22.617471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.210 [2024-07-15 22:48:22.621030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.210 [2024-07-15 22:48:22.630457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.210 [2024-07-15 22:48:22.630889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.210 [2024-07-15 22:48:22.630920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.210 [2024-07-15 22:48:22.630938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.210 [2024-07-15 22:48:22.631174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.210 [2024-07-15 22:48:22.631416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.210 [2024-07-15 22:48:22.631439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.210 [2024-07-15 22:48:22.631454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.210 [2024-07-15 22:48:22.635016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.210 [2024-07-15 22:48:22.644459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.210 [2024-07-15 22:48:22.644914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.210 [2024-07-15 22:48:22.644945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.210 [2024-07-15 22:48:22.644962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.210 [2024-07-15 22:48:22.645198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.210 [2024-07-15 22:48:22.645438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.210 [2024-07-15 22:48:22.645462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.210 [2024-07-15 22:48:22.645476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.210 [2024-07-15 22:48:22.649040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.210 [2024-07-15 22:48:22.658474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.210 [2024-07-15 22:48:22.658927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.210 [2024-07-15 22:48:22.658958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.210 [2024-07-15 22:48:22.658981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.210 [2024-07-15 22:48:22.659219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.210 [2024-07-15 22:48:22.659459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.210 [2024-07-15 22:48:22.659483] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.211 [2024-07-15 22:48:22.659498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.211 [2024-07-15 22:48:22.663064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.211 [2024-07-15 22:48:22.672285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.211 [2024-07-15 22:48:22.672748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.211 [2024-07-15 22:48:22.672779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.211 [2024-07-15 22:48:22.672796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.211 [2024-07-15 22:48:22.673043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.211 [2024-07-15 22:48:22.673284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.211 [2024-07-15 22:48:22.673308] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.211 [2024-07-15 22:48:22.673323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.211 [2024-07-15 22:48:22.676872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.211 [2024-07-15 22:48:22.686099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.211 [2024-07-15 22:48:22.686527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.211 [2024-07-15 22:48:22.686558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.211 [2024-07-15 22:48:22.686575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.211 [2024-07-15 22:48:22.686811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.211 [2024-07-15 22:48:22.687063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.211 [2024-07-15 22:48:22.687087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.211 [2024-07-15 22:48:22.687102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.211 [2024-07-15 22:48:22.690652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.211 [2024-07-15 22:48:22.700096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.211 [2024-07-15 22:48:22.700524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.211 [2024-07-15 22:48:22.700555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.211 [2024-07-15 22:48:22.700572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.211 [2024-07-15 22:48:22.700808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.211 [2024-07-15 22:48:22.701058] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.211 [2024-07-15 22:48:22.701087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.211 [2024-07-15 22:48:22.701104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.211 [2024-07-15 22:48:22.704655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.472 [2024-07-15 22:48:22.714099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.472 [2024-07-15 22:48:22.714538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.472 [2024-07-15 22:48:22.714568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.472 [2024-07-15 22:48:22.714585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.472 [2024-07-15 22:48:22.714821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.472 [2024-07-15 22:48:22.715070] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.472 [2024-07-15 22:48:22.715094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.472 [2024-07-15 22:48:22.715110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.472 [2024-07-15 22:48:22.718661] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.472 [2024-07-15 22:48:22.728095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.472 [2024-07-15 22:48:22.728553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.472 [2024-07-15 22:48:22.728584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.472 [2024-07-15 22:48:22.728601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.472 [2024-07-15 22:48:22.728837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.472 [2024-07-15 22:48:22.729087] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.472 [2024-07-15 22:48:22.729111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.472 [2024-07-15 22:48:22.729126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.472 [2024-07-15 22:48:22.732677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.472 [2024-07-15 22:48:22.742115] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.472 [2024-07-15 22:48:22.742578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.472 [2024-07-15 22:48:22.742608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.472 [2024-07-15 22:48:22.742625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.472 [2024-07-15 22:48:22.742861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.472 [2024-07-15 22:48:22.743114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.472 [2024-07-15 22:48:22.743138] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.472 [2024-07-15 22:48:22.743153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.472 [2024-07-15 22:48:22.746709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.472 [2024-07-15 22:48:22.755950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.472 [2024-07-15 22:48:22.756403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.472 [2024-07-15 22:48:22.756433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.472 [2024-07-15 22:48:22.756450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.472 [2024-07-15 22:48:22.756686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.472 [2024-07-15 22:48:22.756937] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.472 [2024-07-15 22:48:22.756961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.472 [2024-07-15 22:48:22.756976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.472 [2024-07-15 22:48:22.760526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.472 [2024-07-15 22:48:22.769968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.472 [2024-07-15 22:48:22.770500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.472 [2024-07-15 22:48:22.770530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.472 [2024-07-15 22:48:22.770547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.472 [2024-07-15 22:48:22.770783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.472 [2024-07-15 22:48:22.771033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.472 [2024-07-15 22:48:22.771057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.472 [2024-07-15 22:48:22.771072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.472 [2024-07-15 22:48:22.774625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.472 [2024-07-15 22:48:22.783851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.472 [2024-07-15 22:48:22.784317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.472 [2024-07-15 22:48:22.784348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.473 [2024-07-15 22:48:22.784365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.473 [2024-07-15 22:48:22.784601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.473 [2024-07-15 22:48:22.784842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.473 [2024-07-15 22:48:22.784865] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.473 [2024-07-15 22:48:22.784890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.473 [2024-07-15 22:48:22.788446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.473 [2024-07-15 22:48:22.797675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.473 [2024-07-15 22:48:22.798137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.473 [2024-07-15 22:48:22.798168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.473 [2024-07-15 22:48:22.798185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.473 [2024-07-15 22:48:22.798428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.473 [2024-07-15 22:48:22.798670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.473 [2024-07-15 22:48:22.798692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.473 [2024-07-15 22:48:22.798707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.473 [2024-07-15 22:48:22.802268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.473 [2024-07-15 22:48:22.811494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.473 [2024-07-15 22:48:22.811926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.473 [2024-07-15 22:48:22.811957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.473 [2024-07-15 22:48:22.811974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.473 [2024-07-15 22:48:22.812210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.473 [2024-07-15 22:48:22.812451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.473 [2024-07-15 22:48:22.812474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.473 [2024-07-15 22:48:22.812489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.473 [2024-07-15 22:48:22.816055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.473 [2024-07-15 22:48:22.825486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.473 [2024-07-15 22:48:22.825942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.473 [2024-07-15 22:48:22.825973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.473 [2024-07-15 22:48:22.825990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.473 [2024-07-15 22:48:22.826226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.473 [2024-07-15 22:48:22.826467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.473 [2024-07-15 22:48:22.826490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.473 [2024-07-15 22:48:22.826505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.473 [2024-07-15 22:48:22.830069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.473 [2024-07-15 22:48:22.839507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.473 [2024-07-15 22:48:22.839956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.473 [2024-07-15 22:48:22.839988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.473 [2024-07-15 22:48:22.840005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.473 [2024-07-15 22:48:22.840242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.473 [2024-07-15 22:48:22.840483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.473 [2024-07-15 22:48:22.840506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.473 [2024-07-15 22:48:22.840526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.473 [2024-07-15 22:48:22.844092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.473 [2024-07-15 22:48:22.853329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.473 [2024-07-15 22:48:22.853787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.473 [2024-07-15 22:48:22.853817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.473 [2024-07-15 22:48:22.853834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.473 [2024-07-15 22:48:22.854081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.473 [2024-07-15 22:48:22.854322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.473 [2024-07-15 22:48:22.854346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.473 [2024-07-15 22:48:22.854361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.473 [2024-07-15 22:48:22.857922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.473 [2024-07-15 22:48:22.867172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.473 [2024-07-15 22:48:22.867628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.473 [2024-07-15 22:48:22.867659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.473 [2024-07-15 22:48:22.867676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.473 [2024-07-15 22:48:22.867926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.473 [2024-07-15 22:48:22.868168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.473 [2024-07-15 22:48:22.868191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.473 [2024-07-15 22:48:22.868206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.473 [2024-07-15 22:48:22.871760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.473 [2024-07-15 22:48:22.880997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.473 [2024-07-15 22:48:22.881424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.473 [2024-07-15 22:48:22.881455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.473 [2024-07-15 22:48:22.881472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.473 [2024-07-15 22:48:22.881707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.473 [2024-07-15 22:48:22.881960] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.473 [2024-07-15 22:48:22.881984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.473 [2024-07-15 22:48:22.881999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.473 [2024-07-15 22:48:22.885551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.473 [2024-07-15 22:48:22.894992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.473 [2024-07-15 22:48:22.895421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.473 [2024-07-15 22:48:22.895452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.473 [2024-07-15 22:48:22.895469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.473 [2024-07-15 22:48:22.895705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.473 [2024-07-15 22:48:22.895957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.473 [2024-07-15 22:48:22.895981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.473 [2024-07-15 22:48:22.895996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.473 [2024-07-15 22:48:22.899548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.473 [2024-07-15 22:48:22.908991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.473 [2024-07-15 22:48:22.909421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.473 [2024-07-15 22:48:22.909452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.473 [2024-07-15 22:48:22.909469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.473 [2024-07-15 22:48:22.909706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.473 [2024-07-15 22:48:22.909959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.473 [2024-07-15 22:48:22.909983] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.473 [2024-07-15 22:48:22.909998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.473 [2024-07-15 22:48:22.913547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.473 [2024-07-15 22:48:22.922990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.473 [2024-07-15 22:48:22.923451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.473 [2024-07-15 22:48:22.923481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.473 [2024-07-15 22:48:22.923498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.473 [2024-07-15 22:48:22.923734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.473 [2024-07-15 22:48:22.923987] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.473 [2024-07-15 22:48:22.924012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.473 [2024-07-15 22:48:22.924027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.473 [2024-07-15 22:48:22.927576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.473 [2024-07-15 22:48:22.936799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.473 [2024-07-15 22:48:22.937233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.473 [2024-07-15 22:48:22.937264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.474 [2024-07-15 22:48:22.937281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.474 [2024-07-15 22:48:22.937517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.474 [2024-07-15 22:48:22.937768] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.474 [2024-07-15 22:48:22.937791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.474 [2024-07-15 22:48:22.937806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.474 [2024-07-15 22:48:22.941371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.474 [2024-07-15 22:48:22.950823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.474 [2024-07-15 22:48:22.951295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.474 [2024-07-15 22:48:22.951326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.474 [2024-07-15 22:48:22.951343] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.474 [2024-07-15 22:48:22.951579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.474 [2024-07-15 22:48:22.951820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.474 [2024-07-15 22:48:22.951844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.474 [2024-07-15 22:48:22.951860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.474 [2024-07-15 22:48:22.955422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.474 [2024-07-15 22:48:22.964657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.474 [2024-07-15 22:48:22.965115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.474 [2024-07-15 22:48:22.965146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.474 [2024-07-15 22:48:22.965163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.474 [2024-07-15 22:48:22.965399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.474 [2024-07-15 22:48:22.965639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.474 [2024-07-15 22:48:22.965663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.474 [2024-07-15 22:48:22.965678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.474 [2024-07-15 22:48:22.969248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.734 [2024-07-15 22:48:22.978492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.734 [2024-07-15 22:48:22.978949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.734 [2024-07-15 22:48:22.978980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.734 [2024-07-15 22:48:22.978998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.734 [2024-07-15 22:48:22.979235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.734 [2024-07-15 22:48:22.979477] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.734 [2024-07-15 22:48:22.979499] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.734 [2024-07-15 22:48:22.979514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.734 [2024-07-15 22:48:22.983087] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.734 [2024-07-15 22:48:22.992317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.734 [2024-07-15 22:48:22.992768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.734 [2024-07-15 22:48:22.992799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.734 [2024-07-15 22:48:22.992816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.734 [2024-07-15 22:48:22.993064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.734 [2024-07-15 22:48:22.993305] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.734 [2024-07-15 22:48:22.993328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.734 [2024-07-15 22:48:22.993343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.734 [2024-07-15 22:48:22.996903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.734 [2024-07-15 22:48:23.006132] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.734 [2024-07-15 22:48:23.006634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.734 [2024-07-15 22:48:23.006665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.734 [2024-07-15 22:48:23.006682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.734 [2024-07-15 22:48:23.006930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.734 [2024-07-15 22:48:23.007171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.734 [2024-07-15 22:48:23.007194] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.734 [2024-07-15 22:48:23.007209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.734 [2024-07-15 22:48:23.010760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.734 [2024-07-15 22:48:23.019996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.734 [2024-07-15 22:48:23.020417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.734 [2024-07-15 22:48:23.020447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.734 [2024-07-15 22:48:23.020464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.735 [2024-07-15 22:48:23.020701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.735 [2024-07-15 22:48:23.020952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.735 [2024-07-15 22:48:23.020976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.735 [2024-07-15 22:48:23.020991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.735 [2024-07-15 22:48:23.024540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.735 [2024-07-15 22:48:23.033986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.735 [2024-07-15 22:48:23.034507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.735 [2024-07-15 22:48:23.034543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.735 [2024-07-15 22:48:23.034561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.735 [2024-07-15 22:48:23.034797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.735 [2024-07-15 22:48:23.035050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.735 [2024-07-15 22:48:23.035074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.735 [2024-07-15 22:48:23.035089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.735 [2024-07-15 22:48:23.038640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.735 [2024-07-15 22:48:23.047862] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.735 [2024-07-15 22:48:23.048296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.735 [2024-07-15 22:48:23.048327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.735 [2024-07-15 22:48:23.048344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.735 [2024-07-15 22:48:23.048581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.735 [2024-07-15 22:48:23.048822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.735 [2024-07-15 22:48:23.048845] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.735 [2024-07-15 22:48:23.048860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.735 [2024-07-15 22:48:23.052421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.735 [2024-07-15 22:48:23.061889] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.735 [2024-07-15 22:48:23.062352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.735 [2024-07-15 22:48:23.062383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.735 [2024-07-15 22:48:23.062401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.735 [2024-07-15 22:48:23.062637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.735 [2024-07-15 22:48:23.062889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.735 [2024-07-15 22:48:23.062913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.735 [2024-07-15 22:48:23.062928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.735 [2024-07-15 22:48:23.066480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.735 [2024-07-15 22:48:23.075702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.735 [2024-07-15 22:48:23.076224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.735 [2024-07-15 22:48:23.076254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.735 [2024-07-15 22:48:23.076272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.735 [2024-07-15 22:48:23.076507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.735 [2024-07-15 22:48:23.076753] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.735 [2024-07-15 22:48:23.076777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.735 [2024-07-15 22:48:23.076792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.735 [2024-07-15 22:48:23.080355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.735 [2024-07-15 22:48:23.089583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.735 [2024-07-15 22:48:23.090010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.735 [2024-07-15 22:48:23.090041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.735 [2024-07-15 22:48:23.090058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.735 [2024-07-15 22:48:23.090295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.735 [2024-07-15 22:48:23.090536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.735 [2024-07-15 22:48:23.090559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.735 [2024-07-15 22:48:23.090574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.735 [2024-07-15 22:48:23.094136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.735 [2024-07-15 22:48:23.103578] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.735 [2024-07-15 22:48:23.104041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.735 [2024-07-15 22:48:23.104072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.735 [2024-07-15 22:48:23.104090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.735 [2024-07-15 22:48:23.104326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.735 [2024-07-15 22:48:23.104567] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.735 [2024-07-15 22:48:23.104590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.735 [2024-07-15 22:48:23.104605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.735 [2024-07-15 22:48:23.108158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.735 [2024-07-15 22:48:23.117598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.735 [2024-07-15 22:48:23.118054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.735 [2024-07-15 22:48:23.118085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.735 [2024-07-15 22:48:23.118102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.735 [2024-07-15 22:48:23.118338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.735 [2024-07-15 22:48:23.118579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.735 [2024-07-15 22:48:23.118602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.735 [2024-07-15 22:48:23.118617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.735 [2024-07-15 22:48:23.122180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.735 [2024-07-15 22:48:23.131416] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.735 [2024-07-15 22:48:23.131861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.735 [2024-07-15 22:48:23.131919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.735 [2024-07-15 22:48:23.131937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.735 [2024-07-15 22:48:23.132174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.735 [2024-07-15 22:48:23.132414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.735 [2024-07-15 22:48:23.132437] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.735 [2024-07-15 22:48:23.132452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.735 [2024-07-15 22:48:23.136009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.735 [2024-07-15 22:48:23.145231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.735 [2024-07-15 22:48:23.145681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.735 [2024-07-15 22:48:23.145711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.735 [2024-07-15 22:48:23.145728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.735 [2024-07-15 22:48:23.145977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.735 [2024-07-15 22:48:23.146218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.735 [2024-07-15 22:48:23.146241] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.735 [2024-07-15 22:48:23.146256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.735 [2024-07-15 22:48:23.149806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.735 [2024-07-15 22:48:23.159249] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.735 [2024-07-15 22:48:23.159652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.735 [2024-07-15 22:48:23.159683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.735 [2024-07-15 22:48:23.159701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.735 [2024-07-15 22:48:23.159949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.735 [2024-07-15 22:48:23.160192] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.735 [2024-07-15 22:48:23.160215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.735 [2024-07-15 22:48:23.160229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.735 [2024-07-15 22:48:23.163791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.735 [2024-07-15 22:48:23.173256] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.735 [2024-07-15 22:48:23.173694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.736 [2024-07-15 22:48:23.173724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.736 [2024-07-15 22:48:23.173747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.736 [2024-07-15 22:48:23.173993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.736 [2024-07-15 22:48:23.174234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.736 [2024-07-15 22:48:23.174258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.736 [2024-07-15 22:48:23.174272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.736 [2024-07-15 22:48:23.177837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.736 [2024-07-15 22:48:23.187095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.736 [2024-07-15 22:48:23.187544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.736 [2024-07-15 22:48:23.187575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.736 [2024-07-15 22:48:23.187592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.736 [2024-07-15 22:48:23.187828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.736 [2024-07-15 22:48:23.188081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.736 [2024-07-15 22:48:23.188105] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.736 [2024-07-15 22:48:23.188120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.736 [2024-07-15 22:48:23.191671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.736 [2024-07-15 22:48:23.200952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.736 [2024-07-15 22:48:23.201412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.736 [2024-07-15 22:48:23.201461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.736 [2024-07-15 22:48:23.201478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.736 [2024-07-15 22:48:23.201714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.736 [2024-07-15 22:48:23.201967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.736 [2024-07-15 22:48:23.201991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.736 [2024-07-15 22:48:23.202006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.736 [2024-07-15 22:48:23.205564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.736 [2024-07-15 22:48:23.214809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.736 [2024-07-15 22:48:23.215430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.736 [2024-07-15 22:48:23.215480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.736 [2024-07-15 22:48:23.215497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.736 [2024-07-15 22:48:23.215733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.736 [2024-07-15 22:48:23.215988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.736 [2024-07-15 22:48:23.216018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.736 [2024-07-15 22:48:23.216033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.736 [2024-07-15 22:48:23.219596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.736 [2024-07-15 22:48:23.228637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.736 [2024-07-15 22:48:23.229080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.736 [2024-07-15 22:48:23.229111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.736 [2024-07-15 22:48:23.229129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.736 [2024-07-15 22:48:23.229366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.736 [2024-07-15 22:48:23.229606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.736 [2024-07-15 22:48:23.229629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.736 [2024-07-15 22:48:23.229644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.736 [2024-07-15 22:48:23.233210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.994 [2024-07-15 22:48:23.242460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.994 [2024-07-15 22:48:23.242911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.994 [2024-07-15 22:48:23.242943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.994 [2024-07-15 22:48:23.242960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.994 [2024-07-15 22:48:23.243198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.994 [2024-07-15 22:48:23.243439] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.994 [2024-07-15 22:48:23.243462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.994 [2024-07-15 22:48:23.243478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.994 [2024-07-15 22:48:23.247046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.994 [2024-07-15 22:48:23.256316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.994 [2024-07-15 22:48:23.256945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.994 [2024-07-15 22:48:23.256986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.994 [2024-07-15 22:48:23.257004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.994 [2024-07-15 22:48:23.257241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.994 [2024-07-15 22:48:23.257483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.994 [2024-07-15 22:48:23.257509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.994 [2024-07-15 22:48:23.257524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.994 [2024-07-15 22:48:23.261097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.994 [2024-07-15 22:48:23.270150] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.994 [2024-07-15 22:48:23.270702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.994 [2024-07-15 22:48:23.270753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.994 [2024-07-15 22:48:23.270771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.994 [2024-07-15 22:48:23.271019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.994 [2024-07-15 22:48:23.271261] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.994 [2024-07-15 22:48:23.271285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.994 [2024-07-15 22:48:23.271300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.994 [2024-07-15 22:48:23.274860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.994 [2024-07-15 22:48:23.284115] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.994 [2024-07-15 22:48:23.284599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.994 [2024-07-15 22:48:23.284631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.994 [2024-07-15 22:48:23.284648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.994 [2024-07-15 22:48:23.284898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.994 [2024-07-15 22:48:23.285141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.994 [2024-07-15 22:48:23.285164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.994 [2024-07-15 22:48:23.285179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.994 [2024-07-15 22:48:23.288739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.994 [2024-07-15 22:48:23.297991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.994 [2024-07-15 22:48:23.298514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.994 [2024-07-15 22:48:23.298545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.995 [2024-07-15 22:48:23.298562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.995 [2024-07-15 22:48:23.298799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.995 [2024-07-15 22:48:23.299050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.995 [2024-07-15 22:48:23.299074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.995 [2024-07-15 22:48:23.299089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.995 [2024-07-15 22:48:23.302647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.995 [2024-07-15 22:48:23.311898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.995 [2024-07-15 22:48:23.312400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.995 [2024-07-15 22:48:23.312449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.995 [2024-07-15 22:48:23.312467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.995 [2024-07-15 22:48:23.312708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.995 [2024-07-15 22:48:23.312963] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.995 [2024-07-15 22:48:23.312988] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.995 [2024-07-15 22:48:23.313003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.995 [2024-07-15 22:48:23.316574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.995 [2024-07-15 22:48:23.325816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.995 [2024-07-15 22:48:23.326299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.995 [2024-07-15 22:48:23.326330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.995 [2024-07-15 22:48:23.326348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.995 [2024-07-15 22:48:23.326584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.995 [2024-07-15 22:48:23.326825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.995 [2024-07-15 22:48:23.326848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.995 [2024-07-15 22:48:23.326863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.995 [2024-07-15 22:48:23.330425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.995 [2024-07-15 22:48:23.339674] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.995 [2024-07-15 22:48:23.340136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.995 [2024-07-15 22:48:23.340167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.995 [2024-07-15 22:48:23.340184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.995 [2024-07-15 22:48:23.340420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.995 [2024-07-15 22:48:23.340661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.995 [2024-07-15 22:48:23.340684] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.995 [2024-07-15 22:48:23.340699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.995 [2024-07-15 22:48:23.344272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.995 [2024-07-15 22:48:23.353512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.995 [2024-07-15 22:48:23.353954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.995 [2024-07-15 22:48:23.353985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.995 [2024-07-15 22:48:23.354003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.995 [2024-07-15 22:48:23.354239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.995 [2024-07-15 22:48:23.354481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.995 [2024-07-15 22:48:23.354504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.995 [2024-07-15 22:48:23.354524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.995 [2024-07-15 22:48:23.358094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.995 [2024-07-15 22:48:23.367348] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.995 [2024-07-15 22:48:23.367789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.995 [2024-07-15 22:48:23.367820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.995 [2024-07-15 22:48:23.367837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.995 [2024-07-15 22:48:23.368083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.995 [2024-07-15 22:48:23.368325] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.995 [2024-07-15 22:48:23.368348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.995 [2024-07-15 22:48:23.368363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.995 [2024-07-15 22:48:23.371927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.995 [2024-07-15 22:48:23.381376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.995 [2024-07-15 22:48:23.381827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.995 [2024-07-15 22:48:23.381858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.995 [2024-07-15 22:48:23.381884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.995 [2024-07-15 22:48:23.382124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.995 [2024-07-15 22:48:23.382365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.995 [2024-07-15 22:48:23.382388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.995 [2024-07-15 22:48:23.382403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.995 [2024-07-15 22:48:23.385967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.995 [2024-07-15 22:48:23.395218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.995 [2024-07-15 22:48:23.395640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.995 [2024-07-15 22:48:23.395671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.995 [2024-07-15 22:48:23.395689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.995 [2024-07-15 22:48:23.395936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.995 [2024-07-15 22:48:23.396177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.995 [2024-07-15 22:48:23.396200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.995 [2024-07-15 22:48:23.396215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.995 [2024-07-15 22:48:23.399771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.995 [2024-07-15 22:48:23.409220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.995 [2024-07-15 22:48:23.409674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.995 [2024-07-15 22:48:23.409704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.995 [2024-07-15 22:48:23.409721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.995 [2024-07-15 22:48:23.409969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.995 [2024-07-15 22:48:23.410210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.995 [2024-07-15 22:48:23.410233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.995 [2024-07-15 22:48:23.410248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.995 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1360152 Killed "${NVMF_APP[@]}" "$@" 00:24:39.995 [2024-07-15 22:48:23.413801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.995 22:48:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:24:39.995 22:48:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:39.995 22:48:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:39.995 22:48:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:39.995 22:48:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:39.995 22:48:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1361224 00:24:39.995 22:48:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:39.995 22:48:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1361224 00:24:39.995 22:48:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@823 -- # '[' -z 1361224 ']' 00:24:39.995 22:48:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.995 22:48:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@828 -- # local max_retries=100 00:24:39.995 22:48:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:39.995 22:48:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # xtrace_disable 00:24:39.995 22:48:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:39.995 [2024-07-15 22:48:23.423047] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.995 [2024-07-15 22:48:23.423494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.995 [2024-07-15 22:48:23.423525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.995 [2024-07-15 22:48:23.423542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.995 [2024-07-15 22:48:23.423778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.995 [2024-07-15 22:48:23.424030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.995 [2024-07-15 22:48:23.424054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.995 [2024-07-15 22:48:23.424069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.995 [2024-07-15 22:48:23.427623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.995 [2024-07-15 22:48:23.436861] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.995 [2024-07-15 22:48:23.437292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.995 [2024-07-15 22:48:23.437327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.995 [2024-07-15 22:48:23.437346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.995 [2024-07-15 22:48:23.437582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.995 [2024-07-15 22:48:23.437824] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.995 [2024-07-15 22:48:23.437847] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.995 [2024-07-15 22:48:23.437862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.995 [2024-07-15 22:48:23.441430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.995 [2024-07-15 22:48:23.450894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.995 [2024-07-15 22:48:23.451305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.995 [2024-07-15 22:48:23.451336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.995 [2024-07-15 22:48:23.451353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.995 [2024-07-15 22:48:23.451609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.996 [2024-07-15 22:48:23.451851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.996 [2024-07-15 22:48:23.451885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.996 [2024-07-15 22:48:23.451904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.996 [2024-07-15 22:48:23.455459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.996 [2024-07-15 22:48:23.464918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.996 [2024-07-15 22:48:23.465394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.996 [2024-07-15 22:48:23.465425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.996 [2024-07-15 22:48:23.465443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.996 [2024-07-15 22:48:23.465679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.996 [2024-07-15 22:48:23.465932] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.996 [2024-07-15 22:48:23.465957] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.996 [2024-07-15 22:48:23.465972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.996 [2024-07-15 22:48:23.468209] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:24:39.996 [2024-07-15 22:48:23.468276] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.996 [2024-07-15 22:48:23.469528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.996 [2024-07-15 22:48:23.478763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.996 [2024-07-15 22:48:23.479223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.996 [2024-07-15 22:48:23.479254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.996 [2024-07-15 22:48:23.479278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.996 [2024-07-15 22:48:23.479516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.996 [2024-07-15 22:48:23.479757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.996 [2024-07-15 22:48:23.479780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.996 [2024-07-15 22:48:23.479795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.996 [2024-07-15 22:48:23.483364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.996 [2024-07-15 22:48:23.492770] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.996 [2024-07-15 22:48:23.493196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.996 [2024-07-15 22:48:23.493228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:39.996 [2024-07-15 22:48:23.493246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:39.996 [2024-07-15 22:48:23.493482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:39.996 [2024-07-15 22:48:23.493724] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.996 [2024-07-15 22:48:23.493747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.996 [2024-07-15 22:48:23.493762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.254 [2024-07-15 22:48:23.497327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.254 [2024-07-15 22:48:23.506772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.254 [2024-07-15 22:48:23.507206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.254 [2024-07-15 22:48:23.507237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:40.254 [2024-07-15 22:48:23.507255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:40.254 [2024-07-15 22:48:23.507492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:40.254 [2024-07-15 22:48:23.507732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.254 [2024-07-15 22:48:23.507756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.254 [2024-07-15 22:48:23.507771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.254 [2024-07-15 22:48:23.511343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.254 [2024-07-15 22:48:23.520590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.254 [2024-07-15 22:48:23.521018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.254 [2024-07-15 22:48:23.521050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:40.254 [2024-07-15 22:48:23.521068] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:40.254 [2024-07-15 22:48:23.521305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:40.254 [2024-07-15 22:48:23.521546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.254 [2024-07-15 22:48:23.521574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.254 [2024-07-15 22:48:23.521590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.254 [2024-07-15 22:48:23.525156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.254 [2024-07-15 22:48:23.534418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.254 [2024-07-15 22:48:23.534886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.254 [2024-07-15 22:48:23.534918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:40.254 [2024-07-15 22:48:23.534935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:40.254 [2024-07-15 22:48:23.535172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:40.254 [2024-07-15 22:48:23.535413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.254 [2024-07-15 22:48:23.535436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.254 [2024-07-15 22:48:23.535451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.254 [2024-07-15 22:48:23.539018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.254 [2024-07-15 22:48:23.544118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:40.254 [2024-07-15 22:48:23.548260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.254 [2024-07-15 22:48:23.548728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.254 [2024-07-15 22:48:23.548760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:40.254 [2024-07-15 22:48:23.548778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:40.254 [2024-07-15 22:48:23.549030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:40.254 [2024-07-15 22:48:23.549274] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.254 [2024-07-15 22:48:23.549297] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.254 [2024-07-15 22:48:23.549312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.254 [2024-07-15 22:48:23.552894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.254 [2024-07-15 22:48:23.562151] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.254 [2024-07-15 22:48:23.562755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.255 [2024-07-15 22:48:23.562795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:40.255 [2024-07-15 22:48:23.562816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:40.255 [2024-07-15 22:48:23.563069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:40.255 [2024-07-15 22:48:23.563315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.255 [2024-07-15 22:48:23.563339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.255 [2024-07-15 22:48:23.563357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.255 [2024-07-15 22:48:23.566929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.255 [2024-07-15 22:48:23.576156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.255 [2024-07-15 22:48:23.576605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.255 [2024-07-15 22:48:23.576636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:40.255 [2024-07-15 22:48:23.576654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:40.255 [2024-07-15 22:48:23.576902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:40.255 [2024-07-15 22:48:23.577144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.255 [2024-07-15 22:48:23.577168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.255 [2024-07-15 22:48:23.577183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.255 [2024-07-15 22:48:23.580735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.255 [2024-07-15 22:48:23.589969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.255 [2024-07-15 22:48:23.590450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.255 [2024-07-15 22:48:23.590481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:40.255 [2024-07-15 22:48:23.590499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:40.255 [2024-07-15 22:48:23.590735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:40.255 [2024-07-15 22:48:23.590988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.255 [2024-07-15 22:48:23.591012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.255 [2024-07-15 22:48:23.591027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.255 [2024-07-15 22:48:23.594578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.255 [2024-07-15 22:48:23.603803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.255 [2024-07-15 22:48:23.604253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.255 [2024-07-15 22:48:23.604284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:40.255 [2024-07-15 22:48:23.604302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:40.255 [2024-07-15 22:48:23.604540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:40.255 [2024-07-15 22:48:23.604782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.255 [2024-07-15 22:48:23.604806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.255 [2024-07-15 22:48:23.604821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.255 [2024-07-15 22:48:23.608381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.255 [2024-07-15 22:48:23.617837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.255 [2024-07-15 22:48:23.618448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.255 [2024-07-15 22:48:23.618489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:40.255 [2024-07-15 22:48:23.618521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:40.255 [2024-07-15 22:48:23.618768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:40.255 [2024-07-15 22:48:23.619024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.255 [2024-07-15 22:48:23.619049] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.255 [2024-07-15 22:48:23.619067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.255 [2024-07-15 22:48:23.622623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.255 [2024-07-15 22:48:23.631850] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.255 [2024-07-15 22:48:23.632305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.255 [2024-07-15 22:48:23.632337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:40.255 [2024-07-15 22:48:23.632355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:40.255 [2024-07-15 22:48:23.632591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:40.255 [2024-07-15 22:48:23.632833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.255 [2024-07-15 22:48:23.632856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.255 [2024-07-15 22:48:23.632872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.255 [2024-07-15 22:48:23.636432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.255 [2024-07-15 22:48:23.645664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.255 [2024-07-15 22:48:23.646137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.255 [2024-07-15 22:48:23.646169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:40.255 [2024-07-15 22:48:23.646186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:40.255 [2024-07-15 22:48:23.646423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:40.255 [2024-07-15 22:48:23.646664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.255 [2024-07-15 22:48:23.646688] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.255 [2024-07-15 22:48:23.646703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.255 [2024-07-15 22:48:23.650266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.255 [2024-07-15 22:48:23.659494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.255 [2024-07-15 22:48:23.659942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.255 [2024-07-15 22:48:23.659973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:40.255 [2024-07-15 22:48:23.659991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:40.255 [2024-07-15 22:48:23.660228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:40.255 [2024-07-15 22:48:23.660470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.255 [2024-07-15 22:48:23.660501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.255 [2024-07-15 22:48:23.660517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.255 [2024-07-15 22:48:23.664085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.255 [2024-07-15 22:48:23.665219] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:40.255 [2024-07-15 22:48:23.665254] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:40.255 [2024-07-15 22:48:23.665270] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:40.255 [2024-07-15 22:48:23.665283] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:40.255 [2024-07-15 22:48:23.665294] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:40.255 [2024-07-15 22:48:23.665352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:24:40.255 [2024-07-15 22:48:23.665411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:24:40.255 [2024-07-15 22:48:23.665414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:40.255 [2024-07-15 22:48:23.673324] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.255 [2024-07-15 22:48:23.673892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.255 [2024-07-15 22:48:23.673940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:40.255 [2024-07-15 22:48:23.673960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:40.255 [2024-07-15 22:48:23.674205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:40.255 [2024-07-15 22:48:23.674452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.255 [2024-07-15 22:48:23.674476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.255 [2024-07-15 22:48:23.674494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.255 [2024-07-15 22:48:23.678057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.255 [2024-07-15 22:48:23.687309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.255 [2024-07-15 22:48:23.687946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.255 [2024-07-15 22:48:23.687991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:40.255 [2024-07-15 22:48:23.688014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:40.255 [2024-07-15 22:48:23.688266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:40.255 [2024-07-15 22:48:23.688514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.255 [2024-07-15 22:48:23.688538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.255 [2024-07-15 22:48:23.688558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.255 [2024-07-15 22:48:23.692123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.255 [2024-07-15 22:48:23.701380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.255 [2024-07-15 22:48:23.701987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.255 [2024-07-15 22:48:23.702032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:40.255 [2024-07-15 22:48:23.702069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:40.256 [2024-07-15 22:48:23.702320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:40.256 [2024-07-15 22:48:23.702569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.256 [2024-07-15 22:48:23.702593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.256 [2024-07-15 22:48:23.702612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.256 [2024-07-15 22:48:23.706176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.256 [2024-07-15 22:48:23.715427] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.256 [2024-07-15 22:48:23.716095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.256 [2024-07-15 22:48:23.716143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:40.256 [2024-07-15 22:48:23.716167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:40.256 [2024-07-15 22:48:23.716418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:40.256 [2024-07-15 22:48:23.716665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.256 [2024-07-15 22:48:23.716689] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.256 [2024-07-15 22:48:23.716709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.256 [2024-07-15 22:48:23.720273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.256 [2024-07-15 22:48:23.729309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.256 [2024-07-15 22:48:23.729820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.256 [2024-07-15 22:48:23.729858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:40.256 [2024-07-15 22:48:23.729886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:40.256 [2024-07-15 22:48:23.730134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:40.256 [2024-07-15 22:48:23.730378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.256 [2024-07-15 22:48:23.730403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.256 [2024-07-15 22:48:23.730420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.256 [2024-07-15 22:48:23.733980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.256 [2024-07-15 22:48:23.743238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.256 [2024-07-15 22:48:23.743909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.256 [2024-07-15 22:48:23.743960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:40.256 [2024-07-15 22:48:23.743984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:40.256 [2024-07-15 22:48:23.744239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:40.256 [2024-07-15 22:48:23.744487] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.256 [2024-07-15 22:48:23.744526] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.256 [2024-07-15 22:48:23.744546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.256 [2024-07-15 22:48:23.748109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.514 [2024-07-15 22:48:23.756776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.514 [2024-07-15 22:48:23.757256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.514 [2024-07-15 22:48:23.757290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:40.514 [2024-07-15 22:48:23.757309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:40.514 [2024-07-15 22:48:23.757545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:40.514 [2024-07-15 22:48:23.757781] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.514 [2024-07-15 22:48:23.757803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.514 [2024-07-15 22:48:23.757820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.514 [2024-07-15 22:48:23.761081] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.514 [2024-07-15 22:48:23.770436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.514 [2024-07-15 22:48:23.770869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.514 [2024-07-15 22:48:23.770905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:40.514 [2024-07-15 22:48:23.770922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:40.514 [2024-07-15 22:48:23.771135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:40.514 [2024-07-15 22:48:23.771361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.514 [2024-07-15 22:48:23.771382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.514 [2024-07-15 22:48:23.771395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.514 [2024-07-15 22:48:23.774629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.514 [2024-07-15 22:48:23.783855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.514 [2024-07-15 22:48:23.784273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.514 [2024-07-15 22:48:23.784300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:40.514 [2024-07-15 22:48:23.784317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:40.514 [2024-07-15 22:48:23.784543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:40.514 [2024-07-15 22:48:23.784754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.514 [2024-07-15 22:48:23.784774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.514 [2024-07-15 22:48:23.784788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.514 [2024-07-15 22:48:23.787986] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.514 [2024-07-15 22:48:23.797306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.514 [2024-07-15 22:48:23.797724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.514 [2024-07-15 22:48:23.797751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:40.514 [2024-07-15 22:48:23.797767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:40.514 [2024-07-15 22:48:23.797992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:40.514 [2024-07-15 22:48:23.798225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.514 [2024-07-15 22:48:23.798245] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.514 [2024-07-15 22:48:23.798259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.514 [2024-07-15 22:48:23.801500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.514 [2024-07-15 22:48:23.810791] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.514 [2024-07-15 22:48:23.811225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.514 [2024-07-15 22:48:23.811253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:40.514 [2024-07-15 22:48:23.811269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:40.514 [2024-07-15 22:48:23.811495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:40.514 [2024-07-15 22:48:23.811706] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.514 [2024-07-15 22:48:23.811726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.514 [2024-07-15 22:48:23.811739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.514 [2024-07-15 22:48:23.814904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.514 [2024-07-15 22:48:23.824401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.514 [2024-07-15 22:48:23.824828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.514 [2024-07-15 22:48:23.824856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:40.514 [2024-07-15 22:48:23.824871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:40.514 [2024-07-15 22:48:23.825092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:40.514 [2024-07-15 22:48:23.825321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.514 [2024-07-15 22:48:23.825341] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.514 [2024-07-15 22:48:23.825355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.514 [2024-07-15 22:48:23.828497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.514 [2024-07-15 22:48:23.837830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.514 [2024-07-15 22:48:23.838270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.514 [2024-07-15 22:48:23.838298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:40.514 [2024-07-15 22:48:23.838314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:40.514 [2024-07-15 22:48:23.838545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:40.515 [2024-07-15 22:48:23.838755] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.515 [2024-07-15 22:48:23.838775] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.515 [2024-07-15 22:48:23.838788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.515 [2024-07-15 22:48:23.841961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.515 [2024-07-15 22:48:23.851316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.515 [2024-07-15 22:48:23.851712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.515 [2024-07-15 22:48:23.851740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:40.515 [2024-07-15 22:48:23.851756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:40.515 [2024-07-15 22:48:23.851978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:40.515 [2024-07-15 22:48:23.852211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.515 [2024-07-15 22:48:23.852232] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.515 [2024-07-15 22:48:23.852245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.515 [2024-07-15 22:48:23.855427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.515 [2024-07-15 22:48:23.864757] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.515 [2024-07-15 22:48:23.865219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.515 [2024-07-15 22:48:23.865247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:40.515 [2024-07-15 22:48:23.865262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:40.515 [2024-07-15 22:48:23.865474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:40.515 [2024-07-15 22:48:23.865701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.515 [2024-07-15 22:48:23.865721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.515 [2024-07-15 22:48:23.865734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.515 [2024-07-15 22:48:23.868805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.515 [2024-07-15 22:48:23.878185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.515 [2024-07-15 22:48:23.878566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.515 [2024-07-15 22:48:23.878593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:40.515 [2024-07-15 22:48:23.878608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:40.515 [2024-07-15 22:48:23.878834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:40.515 [2024-07-15 22:48:23.879074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.515 [2024-07-15 22:48:23.879096] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.515 [2024-07-15 22:48:23.879114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.515 [2024-07-15 22:48:23.882358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.515 [2024-07-15 22:48:23.891750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.515 [2024-07-15 22:48:23.892174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.515 [2024-07-15 22:48:23.892202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:40.515 [2024-07-15 22:48:23.892218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:40.515 [2024-07-15 22:48:23.892445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:40.515 [2024-07-15 22:48:23.892655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.515 [2024-07-15 22:48:23.892675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.515 [2024-07-15 22:48:23.892688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.515 [2024-07-15 22:48:23.895832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.515 [2024-07-15 22:48:23.905319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.515 [2024-07-15 22:48:23.905743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.515 [2024-07-15 22:48:23.905771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:40.515 [2024-07-15 22:48:23.905786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:40.515 [2024-07-15 22:48:23.906019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:40.515 [2024-07-15 22:48:23.906231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.515 [2024-07-15 22:48:23.906251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.515 [2024-07-15 22:48:23.906264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.515 [2024-07-15 22:48:23.909448] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.515 [2024-07-15 22:48:23.918771] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.515 [2024-07-15 22:48:23.919207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.515 [2024-07-15 22:48:23.919235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:40.515 [2024-07-15 22:48:23.919251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:40.515 [2024-07-15 22:48:23.919463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:40.515 [2024-07-15 22:48:23.919688] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.515 [2024-07-15 22:48:23.919709] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.515 [2024-07-15 22:48:23.919722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.515 [2024-07-15 22:48:23.922867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.515 [2024-07-15 22:48:23.932323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.515 [2024-07-15 22:48:23.932751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.515 [2024-07-15 22:48:23.932778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:40.515 [2024-07-15 22:48:23.932794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:40.515 [2024-07-15 22:48:23.933014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:40.515 [2024-07-15 22:48:23.933246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.515 [2024-07-15 22:48:23.933266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.515 [2024-07-15 22:48:23.933279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.515 [2024-07-15 22:48:23.936423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.515 [2024-07-15 22:48:23.945814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.515 [2024-07-15 22:48:23.946220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.515 [2024-07-15 22:48:23.946248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:40.515 [2024-07-15 22:48:23.946264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:40.515 [2024-07-15 22:48:23.946477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:40.515 [2024-07-15 22:48:23.946719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.515 [2024-07-15 22:48:23.946740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.515 [2024-07-15 22:48:23.946754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.515 [2024-07-15 22:48:23.950043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.515 [2024-07-15 22:48:23.959442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.515 [2024-07-15 22:48:23.959838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.516 [2024-07-15 22:48:23.959866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:40.516 [2024-07-15 22:48:23.959889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:40.516 [2024-07-15 22:48:23.960104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:40.516 [2024-07-15 22:48:23.960333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.516 [2024-07-15 22:48:23.960353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.516 [2024-07-15 22:48:23.960366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.516 [2024-07-15 22:48:23.963620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.516 [2024-07-15 22:48:23.972984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.516 [2024-07-15 22:48:23.973381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.516 [2024-07-15 22:48:23.973409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:40.516 [2024-07-15 22:48:23.973425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:40.516 [2024-07-15 22:48:23.973653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:40.516 [2024-07-15 22:48:23.973894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.516 [2024-07-15 22:48:23.973916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.516 [2024-07-15 22:48:23.973929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.516 [2024-07-15 22:48:23.977078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.516 [2024-07-15 22:48:23.986457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.516 [2024-07-15 22:48:23.986870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.516 [2024-07-15 22:48:23.986904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:40.516 [2024-07-15 22:48:23.986920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:40.516 [2024-07-15 22:48:23.987132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:40.516 [2024-07-15 22:48:23.987357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.516 [2024-07-15 22:48:23.987377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.516 [2024-07-15 22:48:23.987390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.516 [2024-07-15 22:48:23.990614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.516 [2024-07-15 22:48:23.999928] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.516 [2024-07-15 22:48:24.000348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.516 [2024-07-15 22:48:24.000375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:40.516 [2024-07-15 22:48:24.000391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:40.516 [2024-07-15 22:48:24.000603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:40.516 [2024-07-15 22:48:24.000829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.516 [2024-07-15 22:48:24.000850] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.516 [2024-07-15 22:48:24.000863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.516 [2024-07-15 22:48:24.004074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.516 [2024-07-15 22:48:24.013544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.516 [2024-07-15 22:48:24.013906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.516 [2024-07-15 22:48:24.013935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:40.516 [2024-07-15 22:48:24.013951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:40.775 [2024-07-15 22:48:24.014165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:40.775 [2024-07-15 22:48:24.014383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.775 [2024-07-15 22:48:24.014404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.775 [2024-07-15 22:48:24.014418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.775 [2024-07-15 22:48:24.017568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.775 [2024-07-15 22:48:24.027077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.775 [2024-07-15 22:48:24.027465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.775 [2024-07-15 22:48:24.027493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:40.775 [2024-07-15 22:48:24.027509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:40.775 [2024-07-15 22:48:24.027721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:40.775 [2024-07-15 22:48:24.027958] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.775 [2024-07-15 22:48:24.027979] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.775 [2024-07-15 22:48:24.027992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.775 [2024-07-15 22:48:24.031157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.775 [2024-07-15 22:48:24.040542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.775 [2024-07-15 22:48:24.040958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.775 [2024-07-15 22:48:24.040987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420
00:24:40.775 [2024-07-15 22:48:24.041003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set
00:24:40.775 [2024-07-15 22:48:24.041228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor
00:24:40.775 [2024-07-15 22:48:24.041438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.775 [2024-07-15 22:48:24.041458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.775 [2024-07-15 22:48:24.041471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.775 [2024-07-15 22:48:24.044623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.775 [2024-07-15 22:48:24.053933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.775 [2024-07-15 22:48:24.054368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.775 [2024-07-15 22:48:24.054395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:40.775 [2024-07-15 22:48:24.054411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:40.775 [2024-07-15 22:48:24.054623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:40.775 [2024-07-15 22:48:24.054848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.775 [2024-07-15 22:48:24.054869] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.775 [2024-07-15 22:48:24.054908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.775 [2024-07-15 22:48:24.058080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.775 [2024-07-15 22:48:24.067416] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.775 [2024-07-15 22:48:24.067844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.775 [2024-07-15 22:48:24.067883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:40.775 [2024-07-15 22:48:24.067902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:40.775 [2024-07-15 22:48:24.068115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:40.775 [2024-07-15 22:48:24.068342] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.775 [2024-07-15 22:48:24.068363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.775 [2024-07-15 22:48:24.068376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.775 [2024-07-15 22:48:24.071559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.775 [2024-07-15 22:48:24.080852] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.775 [2024-07-15 22:48:24.081397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.775 [2024-07-15 22:48:24.081425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:40.775 [2024-07-15 22:48:24.081441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:40.775 [2024-07-15 22:48:24.081654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:40.775 [2024-07-15 22:48:24.081905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.775 [2024-07-15 22:48:24.081927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.775 [2024-07-15 22:48:24.081940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.775 [2024-07-15 22:48:24.085088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.775 [2024-07-15 22:48:24.094498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.775 [2024-07-15 22:48:24.094902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.775 [2024-07-15 22:48:24.094930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:40.775 [2024-07-15 22:48:24.094946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:40.775 [2024-07-15 22:48:24.095159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:40.775 [2024-07-15 22:48:24.095387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.775 [2024-07-15 22:48:24.095407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.775 [2024-07-15 22:48:24.095420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.775 [2024-07-15 22:48:24.098658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.775 [2024-07-15 22:48:24.108004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.775 [2024-07-15 22:48:24.108441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.775 [2024-07-15 22:48:24.108469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:40.775 [2024-07-15 22:48:24.108484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:40.775 [2024-07-15 22:48:24.108710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:40.776 [2024-07-15 22:48:24.108953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.776 [2024-07-15 22:48:24.108975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.776 [2024-07-15 22:48:24.108989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.776 [2024-07-15 22:48:24.112163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.776 [2024-07-15 22:48:24.121504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.776 [2024-07-15 22:48:24.121953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.776 [2024-07-15 22:48:24.121981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:40.776 [2024-07-15 22:48:24.121997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:40.776 [2024-07-15 22:48:24.122210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:40.776 [2024-07-15 22:48:24.122436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.776 [2024-07-15 22:48:24.122455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.776 [2024-07-15 22:48:24.122468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.776 [2024-07-15 22:48:24.125653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.776 [2024-07-15 22:48:24.134969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.776 [2024-07-15 22:48:24.135370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.776 [2024-07-15 22:48:24.135398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:40.776 [2024-07-15 22:48:24.135413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:40.776 [2024-07-15 22:48:24.135639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:40.776 [2024-07-15 22:48:24.135849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.776 [2024-07-15 22:48:24.135870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.776 [2024-07-15 22:48:24.135905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.776 [2024-07-15 22:48:24.139141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.776 [2024-07-15 22:48:24.148452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.776 [2024-07-15 22:48:24.148830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.776 [2024-07-15 22:48:24.148858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:40.776 [2024-07-15 22:48:24.148874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:40.776 [2024-07-15 22:48:24.149094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:40.776 [2024-07-15 22:48:24.149323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.776 [2024-07-15 22:48:24.149343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.776 [2024-07-15 22:48:24.149356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.776 [2024-07-15 22:48:24.152503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.776 [2024-07-15 22:48:24.161894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.776 [2024-07-15 22:48:24.162383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.776 [2024-07-15 22:48:24.162411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:40.776 [2024-07-15 22:48:24.162427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:40.776 [2024-07-15 22:48:24.162653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:40.776 [2024-07-15 22:48:24.162892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.776 [2024-07-15 22:48:24.162914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.776 [2024-07-15 22:48:24.162928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.776 [2024-07-15 22:48:24.166151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.776 [2024-07-15 22:48:24.175520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.776 [2024-07-15 22:48:24.175934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.776 [2024-07-15 22:48:24.175962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:40.776 [2024-07-15 22:48:24.175977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:40.776 [2024-07-15 22:48:24.176190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:40.776 [2024-07-15 22:48:24.176416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.776 [2024-07-15 22:48:24.176437] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.776 [2024-07-15 22:48:24.176450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.776 [2024-07-15 22:48:24.179682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.776 [2024-07-15 22:48:24.189064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.776 [2024-07-15 22:48:24.189458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.776 [2024-07-15 22:48:24.189486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:40.776 [2024-07-15 22:48:24.189502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:40.776 [2024-07-15 22:48:24.189728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:40.776 [2024-07-15 22:48:24.189966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.776 [2024-07-15 22:48:24.189988] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.776 [2024-07-15 22:48:24.190001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.776 [2024-07-15 22:48:24.193159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.776 [2024-07-15 22:48:24.202584] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.776 [2024-07-15 22:48:24.203005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.776 [2024-07-15 22:48:24.203033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:40.776 [2024-07-15 22:48:24.203054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:40.776 [2024-07-15 22:48:24.203297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:40.776 [2024-07-15 22:48:24.203515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.776 [2024-07-15 22:48:24.203536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.776 [2024-07-15 22:48:24.203549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.776 [2024-07-15 22:48:24.206912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.776 [2024-07-15 22:48:24.215986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.776 [2024-07-15 22:48:24.216388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.776 [2024-07-15 22:48:24.216416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:40.776 [2024-07-15 22:48:24.216432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:40.776 [2024-07-15 22:48:24.216658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:40.776 [2024-07-15 22:48:24.216894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.776 [2024-07-15 22:48:24.216916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.776 [2024-07-15 22:48:24.216930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.776 [2024-07-15 22:48:24.220079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.776 [2024-07-15 22:48:24.229402] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.776 [2024-07-15 22:48:24.229828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.776 [2024-07-15 22:48:24.229855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:40.776 [2024-07-15 22:48:24.229871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:40.776 [2024-07-15 22:48:24.230092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:40.776 [2024-07-15 22:48:24.230320] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.776 [2024-07-15 22:48:24.230340] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.776 [2024-07-15 22:48:24.230354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.776 [2024-07-15 22:48:24.233538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.776 [2024-07-15 22:48:24.242936] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.776 [2024-07-15 22:48:24.243347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.776 [2024-07-15 22:48:24.243375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:40.776 [2024-07-15 22:48:24.243391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:40.776 [2024-07-15 22:48:24.243603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:40.776 [2024-07-15 22:48:24.243830] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.776 [2024-07-15 22:48:24.243854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.776 [2024-07-15 22:48:24.243868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.776 [2024-07-15 22:48:24.247082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.776 [2024-07-15 22:48:24.256392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.776 [2024-07-15 22:48:24.256796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.776 [2024-07-15 22:48:24.256824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:40.776 [2024-07-15 22:48:24.256839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:40.776 [2024-07-15 22:48:24.257062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:40.777 [2024-07-15 22:48:24.257291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.777 [2024-07-15 22:48:24.257312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.777 [2024-07-15 22:48:24.257325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.777 [2024-07-15 22:48:24.260508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.777 [2024-07-15 22:48:24.269802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.777 [2024-07-15 22:48:24.270214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.777 [2024-07-15 22:48:24.270242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:40.777 [2024-07-15 22:48:24.270258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:40.777 [2024-07-15 22:48:24.270498] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:40.777 [2024-07-15 22:48:24.270716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.777 [2024-07-15 22:48:24.270737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.777 [2024-07-15 22:48:24.270750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.777 [2024-07-15 22:48:24.274140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.036 [2024-07-15 22:48:24.283399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.036 [2024-07-15 22:48:24.283815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.036 [2024-07-15 22:48:24.283843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:41.036 [2024-07-15 22:48:24.283858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:41.036 [2024-07-15 22:48:24.284081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:41.036 [2024-07-15 22:48:24.284310] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.036 [2024-07-15 22:48:24.284331] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.036 [2024-07-15 22:48:24.284344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.036 [2024-07-15 22:48:24.287530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.036 [2024-07-15 22:48:24.296882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.036 [2024-07-15 22:48:24.297265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.036 [2024-07-15 22:48:24.297293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:41.036 [2024-07-15 22:48:24.297308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:41.036 [2024-07-15 22:48:24.297521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:41.036 [2024-07-15 22:48:24.297747] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.036 [2024-07-15 22:48:24.297768] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.036 [2024-07-15 22:48:24.297781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.036 [2024-07-15 22:48:24.300938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.036 [2024-07-15 22:48:24.310433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.036 [2024-07-15 22:48:24.310858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.036 [2024-07-15 22:48:24.310893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:41.036 [2024-07-15 22:48:24.310910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:41.036 [2024-07-15 22:48:24.311123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:41.036 [2024-07-15 22:48:24.311349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.036 [2024-07-15 22:48:24.311370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.036 [2024-07-15 22:48:24.311383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.036 [2024-07-15 22:48:24.314582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.036 [2024-07-15 22:48:24.323951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.036 [2024-07-15 22:48:24.324368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.036 [2024-07-15 22:48:24.324396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:41.036 [2024-07-15 22:48:24.324417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:41.036 [2024-07-15 22:48:24.324645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:41.036 [2024-07-15 22:48:24.324857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.036 [2024-07-15 22:48:24.324893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.036 [2024-07-15 22:48:24.324908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.036 [2024-07-15 22:48:24.328037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.036 [2024-07-15 22:48:24.337539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.036 [2024-07-15 22:48:24.337964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.036 [2024-07-15 22:48:24.337996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:41.036 [2024-07-15 22:48:24.338012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:41.036 [2024-07-15 22:48:24.338230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:41.036 [2024-07-15 22:48:24.338448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.036 [2024-07-15 22:48:24.338470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.036 [2024-07-15 22:48:24.338483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.036 [2024-07-15 22:48:24.341842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.036 [2024-07-15 22:48:24.351042] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.036 [2024-07-15 22:48:24.351472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.036 [2024-07-15 22:48:24.351500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:41.037 [2024-07-15 22:48:24.351516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:41.037 [2024-07-15 22:48:24.351728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:41.037 [2024-07-15 22:48:24.351986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.037 [2024-07-15 22:48:24.352008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.037 [2024-07-15 22:48:24.352022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.037 [2024-07-15 22:48:24.355266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.037 [2024-07-15 22:48:24.364546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.037 [2024-07-15 22:48:24.364955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.037 [2024-07-15 22:48:24.364984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:41.037 [2024-07-15 22:48:24.365001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:41.037 [2024-07-15 22:48:24.365214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:41.037 [2024-07-15 22:48:24.365440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.037 [2024-07-15 22:48:24.365461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.037 [2024-07-15 22:48:24.365474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.037 [2024-07-15 22:48:24.368689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.037 [2024-07-15 22:48:24.377987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.037 [2024-07-15 22:48:24.378427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.037 [2024-07-15 22:48:24.378455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:41.037 [2024-07-15 22:48:24.378472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:41.037 [2024-07-15 22:48:24.378684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:41.037 [2024-07-15 22:48:24.378940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.037 [2024-07-15 22:48:24.378962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.037 [2024-07-15 22:48:24.378981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.037 [2024-07-15 22:48:24.382224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.037 [2024-07-15 22:48:24.391444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.037 [2024-07-15 22:48:24.391855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.037 [2024-07-15 22:48:24.391897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:41.037 [2024-07-15 22:48:24.391913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:41.037 [2024-07-15 22:48:24.392127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:41.037 [2024-07-15 22:48:24.392355] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.037 [2024-07-15 22:48:24.392376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.037 [2024-07-15 22:48:24.392389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.037 [2024-07-15 22:48:24.395560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.037 [2024-07-15 22:48:24.405005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.037 [2024-07-15 22:48:24.405397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.037 [2024-07-15 22:48:24.405426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:41.037 [2024-07-15 22:48:24.405441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:41.037 [2024-07-15 22:48:24.405654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:41.037 [2024-07-15 22:48:24.405871] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.037 [2024-07-15 22:48:24.405901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.037 [2024-07-15 22:48:24.405915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.037 [2024-07-15 22:48:24.409151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.037 [2024-07-15 22:48:24.418569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.037 [2024-07-15 22:48:24.418988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.037 [2024-07-15 22:48:24.419017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:41.037 [2024-07-15 22:48:24.419032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:41.037 [2024-07-15 22:48:24.419246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:41.037 [2024-07-15 22:48:24.419471] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.037 [2024-07-15 22:48:24.419491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.037 [2024-07-15 22:48:24.419505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.037 22:48:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:24:41.037 22:48:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # return 0 00:24:41.037 22:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:41.037 22:48:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:41.037 22:48:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:41.037 [2024-07-15 22:48:24.422736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.037 [2024-07-15 22:48:24.432061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.037 [2024-07-15 22:48:24.432483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.037 [2024-07-15 22:48:24.432512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:41.037 [2024-07-15 22:48:24.432527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:41.037 [2024-07-15 22:48:24.432754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:41.037 [2024-07-15 22:48:24.432995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.037 [2024-07-15 22:48:24.433017] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.037 [2024-07-15 22:48:24.433031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.037 [2024-07-15 22:48:24.436268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.037 22:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:41.037 22:48:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:41.037 22:48:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:41.037 22:48:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:41.037 [2024-07-15 22:48:24.443760] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:41.037 [2024-07-15 22:48:24.445595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.037 [2024-07-15 22:48:24.445978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.037 [2024-07-15 22:48:24.446006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:41.037 [2024-07-15 22:48:24.446022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:41.037 [2024-07-15 22:48:24.446234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:41.037 [2024-07-15 22:48:24.446461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.037 [2024-07-15 22:48:24.446481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.037 [2024-07-15 22:48:24.446494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:41.037 22:48:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:41.037 22:48:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:41.037 22:48:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:41.037 22:48:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:41.037 [2024-07-15 22:48:24.449720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.037 [2024-07-15 22:48:24.459103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.037 [2024-07-15 22:48:24.459567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.037 [2024-07-15 22:48:24.459595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:41.037 [2024-07-15 22:48:24.459611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:41.037 [2024-07-15 22:48:24.459865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:41.037 [2024-07-15 22:48:24.460113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.037 [2024-07-15 22:48:24.460135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.037 [2024-07-15 22:48:24.460148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.037 [2024-07-15 22:48:24.463562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.037 [2024-07-15 22:48:24.472620] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.037 [2024-07-15 22:48:24.473098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.037 [2024-07-15 22:48:24.473130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:41.037 [2024-07-15 22:48:24.473147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:41.037 [2024-07-15 22:48:24.473389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:41.037 [2024-07-15 22:48:24.473602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.037 [2024-07-15 22:48:24.473622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.037 [2024-07-15 22:48:24.473638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.037 [2024-07-15 22:48:24.476779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.037 Malloc0 00:24:41.038 [2024-07-15 22:48:24.486241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.038 22:48:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:41.038 [2024-07-15 22:48:24.486744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.038 22:48:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:41.038 [2024-07-15 22:48:24.486779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:41.038 [2024-07-15 22:48:24.486798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:41.038 22:48:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:41.038 22:48:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:41.038 [2024-07-15 22:48:24.487031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:41.038 [2024-07-15 22:48:24.487253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.038 [2024-07-15 22:48:24.487275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.038 [2024-07-15 22:48:24.487292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.038 [2024-07-15 22:48:24.490583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.038 22:48:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:41.038 22:48:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:41.038 22:48:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:41.038 22:48:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:41.038 [2024-07-15 22:48:24.500006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.038 [2024-07-15 22:48:24.500429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.038 [2024-07-15 22:48:24.500468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aac0 with addr=10.0.0.2, port=4420 00:24:41.038 [2024-07-15 22:48:24.500485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aac0 is same with the state(5) to be set 00:24:41.038 [2024-07-15 22:48:24.500698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aac0 (9): Bad file descriptor 00:24:41.038 [2024-07-15 22:48:24.500953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.038 [2024-07-15 22:48:24.500975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.038 [2024-07-15 22:48:24.500989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:41.038 22:48:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:41.038 22:48:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:41.038 22:48:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:41.038 22:48:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:41.038 [2024-07-15 22:48:24.504326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.038 [2024-07-15 22:48:24.506016] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:41.038 22:48:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:41.038 22:48:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1360441 00:24:41.038 [2024-07-15 22:48:24.513526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.298 [2024-07-15 22:48:24.584211] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:49.420 00:24:49.420 Latency(us) 00:24:49.420 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:49.420 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:49.420 Verification LBA range: start 0x0 length 0x4000 00:24:49.420 Nvme1n1 : 15.01 6455.19 25.22 10420.76 0.00 7560.54 1237.90 22233.69 00:24:49.420 =================================================================================================================== 00:24:49.420 Total : 6455.19 25.22 10420.76 0.00 7560.54 1237.90 22233.69 00:24:49.679 22:48:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:24:49.679 22:48:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:49.679 22:48:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:49.679 22:48:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:49.936 22:48:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:49.936 22:48:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:24:49.936 22:48:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:24:49.936 22:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:49.936 22:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:24:49.936 22:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:49.936 22:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:24:49.936 22:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:49.936 22:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:49.936 rmmod nvme_tcp 00:24:49.936 rmmod nvme_fabrics 00:24:49.936 rmmod nvme_keyring 00:24:49.936 22:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:49.936 22:48:33 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@124 -- # set -e 00:24:49.936 22:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:24:49.936 22:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1361224 ']' 00:24:49.936 22:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1361224 00:24:49.936 22:48:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@942 -- # '[' -z 1361224 ']' 00:24:49.936 22:48:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # kill -0 1361224 00:24:49.936 22:48:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@947 -- # uname 00:24:49.936 22:48:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:24:49.936 22:48:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1361224 00:24:49.936 22:48:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:24:49.936 22:48:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:24:49.936 22:48:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1361224' 00:24:49.936 killing process with pid 1361224 00:24:49.936 22:48:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@961 -- # kill 1361224 00:24:49.936 22:48:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # wait 1361224 00:24:50.194 22:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:50.194 22:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:50.194 22:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:50.194 22:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:50.194 22:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:50.194 22:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.194 22:48:33 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:50.194 22:48:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.098 22:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:52.098 00:24:52.098 real 0m22.450s 00:24:52.098 user 1m0.973s 00:24:52.098 sys 0m3.989s 00:24:52.098 22:48:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1118 -- # xtrace_disable 00:24:52.098 22:48:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:52.098 ************************************ 00:24:52.098 END TEST nvmf_bdevperf 00:24:52.098 ************************************ 00:24:52.357 22:48:35 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:24:52.357 22:48:35 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:52.357 22:48:35 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:24:52.357 22:48:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:24:52.357 22:48:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:52.357 ************************************ 00:24:52.357 START TEST nvmf_target_disconnect 00:24:52.357 ************************************ 00:24:52.357 22:48:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:52.357 * Looking for test storage... 
00:24:52.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:52.357 22:48:35 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:52.357 22:48:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:24:52.357 22:48:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:52.357 22:48:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:52.357 22:48:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:52.357 22:48:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:52.357 22:48:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:52.357 22:48:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:52.357 22:48:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:52.357 22:48:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:52.357 22:48:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:52.357 22:48:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:52.357 22:48:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:52.357 22:48:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:52.357 22:48:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:52.357 22:48:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:52.357 22:48:35 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:52.357 22:48:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:52.357 22:48:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:52.357 22:48:35 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:52.357 22:48:35 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:52.357 22:48:35 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:52.358 22:48:35 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.358 22:48:35 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.358 22:48:35 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.358 22:48:35 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:24:52.358 22:48:35 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.358 22:48:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:24:52.358 22:48:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:52.358 22:48:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:52.358 22:48:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:52.358 22:48:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:52.358 22:48:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:52.358 22:48:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:52.358 22:48:35 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:52.358 22:48:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:52.358 22:48:35 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:52.358 22:48:35 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:24:52.358 22:48:35 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:24:52.358 22:48:35 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:24:52.358 22:48:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:52.358 22:48:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:52.358 22:48:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:52.358 22:48:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:52.358 22:48:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:52.358 22:48:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.358 22:48:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:52.358 22:48:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.358 22:48:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:52.358 22:48:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:52.358 22:48:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:24:52.358 22:48:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:54.264 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:54.264 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:54.264 22:48:37 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:54.264 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:54.264 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:54.264 22:48:37 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:54.264 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:54.265 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:54.265 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:54.265 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:54.265 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:54.265 22:48:37 nvmf_tcp.nvmf_target_disconnect 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:54.265 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:54.265 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:54.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:54.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:24:54.265 00:24:54.265 --- 10.0.0.2 ping statistics --- 00:24:54.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.265 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:24:54.265 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:54.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:54.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:24:54.265 00:24:54.265 --- 10.0.0.1 ping statistics --- 00:24:54.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.265 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:24:54.265 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:54.265 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:24:54.265 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:54.265 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:54.265 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:54.265 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:54.265 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:54.265 22:48:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:54.265 22:48:37 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:54.523 22:48:37 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:24:54.523 22:48:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:24:54.523 22:48:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # xtrace_disable 00:24:54.523 22:48:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:54.523 ************************************ 00:24:54.523 START TEST nvmf_target_disconnect_tc1 00:24:54.523 ************************************ 00:24:54.523 22:48:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1117 -- # nvmf_target_disconnect_tc1 00:24:54.523 22:48:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:54.523 22:48:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # local es=0 00:24:54.523 22:48:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:54.523 22:48:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@630 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:54.523 22:48:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:24:54.523 22:48:37 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:54.523 22:48:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:24:54.523 22:48:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:54.523 22:48:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:24:54.523 22:48:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:54.523 22:48:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:24:54.523 22:48:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@645 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:54.523 [2024-07-15 22:48:37.887855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.523 [2024-07-15 22:48:37.887958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18981a0 with addr=10.0.0.2, port=4420 00:24:54.523 [2024-07-15 22:48:37.888016] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:54.523 [2024-07-15 22:48:37.888041] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:54.523 [2024-07-15 22:48:37.888054] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 
00:24:54.523 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:24:54.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:24:54.523 Initializing NVMe Controllers 00:24:54.523 22:48:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@645 -- # es=1 00:24:54.523 22:48:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:24:54.523 22:48:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:24:54.523 22:48:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:24:54.523 00:24:54.523 real 0m0.094s 00:24:54.523 user 0m0.039s 00:24:54.523 sys 0m0.055s 00:24:54.523 22:48:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:24:54.523 22:48:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:54.523 ************************************ 00:24:54.523 END TEST nvmf_target_disconnect_tc1 00:24:54.523 ************************************ 00:24:54.523 22:48:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1136 -- # return 0 00:24:54.523 22:48:37 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:24:54.523 22:48:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:24:54.523 22:48:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # xtrace_disable 00:24:54.523 22:48:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:54.523 ************************************ 00:24:54.523 START TEST nvmf_target_disconnect_tc2 00:24:54.523 ************************************ 00:24:54.523 22:48:37 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1117 -- # nvmf_target_disconnect_tc2 00:24:54.523 22:48:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:24:54.523 22:48:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:54.523 22:48:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:54.523 22:48:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:54.524 22:48:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:54.524 22:48:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1364275 00:24:54.524 22:48:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:54.524 22:48:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1364275 00:24:54.524 22:48:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@823 -- # '[' -z 1364275 ']' 00:24:54.524 22:48:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.524 22:48:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@828 -- # local max_retries=100 00:24:54.524 22:48:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:54.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:54.524 22:48:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # xtrace_disable 00:24:54.524 22:48:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:54.524 [2024-07-15 22:48:38.001346] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:24:54.524 [2024-07-15 22:48:38.001435] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.783 [2024-07-15 22:48:38.069631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:54.783 [2024-07-15 22:48:38.177955] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:54.783 [2024-07-15 22:48:38.178015] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:54.783 [2024-07-15 22:48:38.178033] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:54.783 [2024-07-15 22:48:38.178044] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:54.783 [2024-07-15 22:48:38.178053] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:54.783 [2024-07-15 22:48:38.178150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:24:54.783 [2024-07-15 22:48:38.178217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:24:54.783 [2024-07-15 22:48:38.178276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:24:54.783 [2024-07-15 22:48:38.178279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:55.043 22:48:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:24:55.043 22:48:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # return 0 00:24:55.043 22:48:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:55.044 22:48:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:55.044 22:48:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.044 22:48:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:55.044 22:48:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:55.044 22:48:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:55.044 22:48:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.044 Malloc0 00:24:55.044 22:48:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:55.044 22:48:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 
00:24:55.044 22:48:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:55.044 22:48:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.044 [2024-07-15 22:48:38.355036] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:55.044 22:48:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:55.044 22:48:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:55.044 22:48:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:55.044 22:48:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.044 22:48:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:55.044 22:48:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:55.044 22:48:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:55.044 22:48:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.044 22:48:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:55.044 22:48:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:55.044 22:48:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # 
xtrace_disable 00:24:55.044 22:48:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.044 [2024-07-15 22:48:38.383283] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:55.044 22:48:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:55.044 22:48:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:55.044 22:48:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:55.044 22:48:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.044 22:48:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:55.044 22:48:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1364418 00:24:55.044 22:48:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:55.044 22:48:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:24:56.949 22:48:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1364275 00:24:56.949 22:48:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:24:56.950 Read completed with error (sct=0, sc=8) 00:24:56.950 starting I/O failed 00:24:56.950 Read completed with error (sct=0, sc=8) 00:24:56.950 starting I/O failed 00:24:56.950 Read 
completed with error (sct=0, sc=8) 00:24:56.950 starting I/O failed 00:24:56.950 Read completed with error (sct=0, sc=8) 00:24:56.950 starting I/O failed 00:24:56.950 Read completed with error (sct=0, sc=8) 00:24:56.950 starting I/O failed 00:24:56.950 Read completed with error (sct=0, sc=8) 00:24:56.950 starting I/O failed 00:24:56.950 Read completed with error (sct=0, sc=8) 00:24:56.950 starting I/O failed 00:24:56.950 Read completed with error (sct=0, sc=8) 00:24:56.950 starting I/O failed 00:24:56.950 Read completed with error (sct=0, sc=8) 00:24:56.950 starting I/O failed 00:24:56.950 Read completed with error (sct=0, sc=8) 00:24:56.950 starting I/O failed 00:24:56.950 Write completed with error (sct=0, sc=8) 00:24:56.950 starting I/O failed 00:24:56.950 Read completed with error (sct=0, sc=8) 00:24:56.950 starting I/O failed 00:24:56.950 Read completed with error (sct=0, sc=8) 00:24:56.950 starting I/O failed 00:24:56.950 Read completed with error (sct=0, sc=8) 00:24:56.950 starting I/O failed 00:24:56.950 Read completed with error (sct=0, sc=8) 00:24:56.950 starting I/O failed 00:24:56.950 Read completed with error (sct=0, sc=8) 00:24:56.950 starting I/O failed 00:24:56.950 Write completed with error (sct=0, sc=8) 00:24:56.950 starting I/O failed 00:24:56.950 Write completed with error (sct=0, sc=8) 00:24:56.950 starting I/O failed 00:24:56.950 Read completed with error (sct=0, sc=8) 00:24:56.950 starting I/O failed 00:24:56.950 Read completed with error (sct=0, sc=8) 00:24:56.950 starting I/O failed 00:24:56.950 Write completed with error (sct=0, sc=8) 00:24:56.950 starting I/O failed 00:24:56.950 Read completed with error (sct=0, sc=8) 00:24:56.950 starting I/O failed 00:24:56.950 Read completed with error (sct=0, sc=8) 00:24:56.950 starting I/O failed 00:24:56.950 Read completed with error (sct=0, sc=8) 00:24:56.950 starting I/O failed 00:24:56.950 Write completed with error (sct=0, sc=8) 00:24:56.950 starting I/O failed 00:24:56.950 Read completed with 
error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Write completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Write completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 [2024-07-15 22:48:40.410104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Write completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Write completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Write completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Write completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Write completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 [2024-07-15 22:48:40.410408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Write completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Write completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Write completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Write completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Write completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Write completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Write completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Write completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Write completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 [2024-07-15 22:48:40.410717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Write completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Write completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Write completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Write completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Write completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Write completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Write completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Write completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Write completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Write completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Write completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Read completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 Write completed with error (sct=0, sc=8)
00:24:56.950 starting I/O failed
00:24:56.950 [2024-07-15 22:48:40.411063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:56.950 [2024-07-15 22:48:40.411290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.950 [2024-07-15 22:48:40.411323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:56.950 qpair failed and we were unable to recover it.
00:24:56.950 [2024-07-15 22:48:40.411523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.950 [2024-07-15 22:48:40.411552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:56.950 qpair failed and we were unable to recover it.
00:24:56.950 [2024-07-15 22:48:40.411706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.950 [2024-07-15 22:48:40.411734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:56.950 qpair failed and we were unable to recover it.
00:24:56.950 [2024-07-15 22:48:40.411942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.950 [2024-07-15 22:48:40.411970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:56.950 qpair failed and we were unable to recover it.
00:24:56.950 [2024-07-15 22:48:40.412120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.950 [2024-07-15 22:48:40.412160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:56.950 qpair failed and we were unable to recover it.
00:24:56.950 [2024-07-15 22:48:40.412337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.950 [2024-07-15 22:48:40.412364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:56.950 qpair failed and we were unable to recover it.
00:24:56.950 [2024-07-15 22:48:40.412538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.950 [2024-07-15 22:48:40.412567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:56.950 qpair failed and we were unable to recover it.
00:24:56.950 [2024-07-15 22:48:40.412713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.950 [2024-07-15 22:48:40.412740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:56.950 qpair failed and we were unable to recover it.
00:24:56.950 [2024-07-15 22:48:40.412914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.950 [2024-07-15 22:48:40.412953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:56.950 qpair failed and we were unable to recover it.
00:24:56.950 [2024-07-15 22:48:40.413110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.950 [2024-07-15 22:48:40.413138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:56.950 qpair failed and we were unable to recover it.
00:24:56.950 [2024-07-15 22:48:40.413350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.950 [2024-07-15 22:48:40.413378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:56.950 qpair failed and we were unable to recover it.
00:24:56.950 [2024-07-15 22:48:40.413570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.950 [2024-07-15 22:48:40.413598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:56.950 qpair failed and we were unable to recover it.
00:24:56.950 [2024-07-15 22:48:40.413791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.950 [2024-07-15 22:48:40.413822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:56.950 qpair failed and we were unable to recover it.
00:24:56.950 [2024-07-15 22:48:40.414027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.950 [2024-07-15 22:48:40.414055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:56.950 qpair failed and we were unable to recover it.
00:24:56.950 [2024-07-15 22:48:40.414264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.950 [2024-07-15 22:48:40.414292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:56.950 qpair failed and we were unable to recover it.
00:24:56.950 [2024-07-15 22:48:40.414529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.950 [2024-07-15 22:48:40.414557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:56.950 qpair failed and we were unable to recover it.
00:24:56.950 [2024-07-15 22:48:40.414733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.950 [2024-07-15 22:48:40.414761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:56.950 qpair failed and we were unable to recover it.
00:24:56.950 [2024-07-15 22:48:40.414915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.950 [2024-07-15 22:48:40.414947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:56.950 qpair failed and we were unable to recover it.
00:24:56.950 [2024-07-15 22:48:40.415150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.950 [2024-07-15 22:48:40.415193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:56.950 qpair failed and we were unable to recover it.
00:24:56.950 [2024-07-15 22:48:40.415385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.950 [2024-07-15 22:48:40.415413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:56.950 qpair failed and we were unable to recover it.
00:24:56.950 [2024-07-15 22:48:40.415585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.950 [2024-07-15 22:48:40.415613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:56.950 qpair failed and we were unable to recover it.
00:24:56.950 [2024-07-15 22:48:40.415792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.950 [2024-07-15 22:48:40.415819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:56.950 qpair failed and we were unable to recover it.
00:24:56.950 [2024-07-15 22:48:40.416007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.950 [2024-07-15 22:48:40.416034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:56.950 qpair failed and we were unable to recover it.
00:24:56.950 [2024-07-15 22:48:40.416237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.950 [2024-07-15 22:48:40.416265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:56.950 qpair failed and we were unable to recover it.
00:24:56.950 [2024-07-15 22:48:40.416518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.950 [2024-07-15 22:48:40.416547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:56.950 qpair failed and we were unable to recover it.
00:24:56.950 [2024-07-15 22:48:40.416746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.950 [2024-07-15 22:48:40.416778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:56.950 qpair failed and we were unable to recover it.
00:24:56.950 [2024-07-15 22:48:40.416995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.950 [2024-07-15 22:48:40.417022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:56.950 qpair failed and we were unable to recover it.
00:24:56.950 [2024-07-15 22:48:40.417217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.950 [2024-07-15 22:48:40.417245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:56.950 qpair failed and we were unable to recover it.
00:24:56.950 [2024-07-15 22:48:40.417433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.950 [2024-07-15 22:48:40.417461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:56.950 qpair failed and we were unable to recover it.
00:24:56.950 [2024-07-15 22:48:40.417639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.950 [2024-07-15 22:48:40.417682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:56.950 qpair failed and we were unable to recover it.
00:24:56.950 [2024-07-15 22:48:40.417887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.950 [2024-07-15 22:48:40.417934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:56.950 qpair failed and we were unable to recover it.
00:24:56.950 [2024-07-15 22:48:40.418090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.950 [2024-07-15 22:48:40.418117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:56.950 qpair failed and we were unable to recover it.
00:24:56.950 [2024-07-15 22:48:40.418309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.950 [2024-07-15 22:48:40.418336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.418557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.418598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.418804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.418855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.419085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.419112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.419323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.419366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.419609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.419673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.419925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.419953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.420129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.420184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.420415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.420458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.420735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.420762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.420980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.421007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.421176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.421207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.421479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.421534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.421866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.421957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.422136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.422174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.422336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.422362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.422581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.422608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.422777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.422805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.422985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.423012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.423191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.423219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.423384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.423412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.423611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.423638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.423842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.423873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.424083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.424110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.424316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.424343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.424532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.424561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.424782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.424813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.425040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.425067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.425216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.425244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.425413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.425442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.425735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.425786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.426022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.426050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.426208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.426235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.426500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.426528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.426732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.426760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.426940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.426968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.427139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.427184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.427408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.427438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.427625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.427677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.427886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.427914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.428088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.428115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.428290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.428318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.428518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.428551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.428705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.428734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.428921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.428950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.429096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.429123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.429273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.429301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.429525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.429571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.429891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.429944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.430106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.430132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.430333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.430360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.430530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.430557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.430786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.430837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.431020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.431049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.431189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.431232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.431540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.431590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.431860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.431897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.432076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.432103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.432258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.951 [2024-07-15 22:48:40.432285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:56.951 qpair failed and we were unable to recover it.
00:24:56.951 [2024-07-15 22:48:40.432444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.951 [2024-07-15 22:48:40.432472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.951 qpair failed and we were unable to recover it. 00:24:56.951 [2024-07-15 22:48:40.432657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.951 [2024-07-15 22:48:40.432684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.951 qpair failed and we were unable to recover it. 00:24:56.951 [2024-07-15 22:48:40.432831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.951 [2024-07-15 22:48:40.432856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.951 qpair failed and we were unable to recover it. 00:24:56.951 [2024-07-15 22:48:40.433044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.951 [2024-07-15 22:48:40.433070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.951 qpair failed and we were unable to recover it. 00:24:56.951 [2024-07-15 22:48:40.433270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.951 [2024-07-15 22:48:40.433296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.951 qpair failed and we were unable to recover it. 
00:24:56.951 [2024-07-15 22:48:40.433466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.951 [2024-07-15 22:48:40.433493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.951 qpair failed and we were unable to recover it. 00:24:56.951 [2024-07-15 22:48:40.433768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.951 [2024-07-15 22:48:40.433795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.951 qpair failed and we were unable to recover it. 00:24:56.951 [2024-07-15 22:48:40.433949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.951 [2024-07-15 22:48:40.433976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.951 qpair failed and we were unable to recover it. 00:24:56.951 [2024-07-15 22:48:40.434143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.951 [2024-07-15 22:48:40.434170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.951 qpair failed and we were unable to recover it. 00:24:56.951 [2024-07-15 22:48:40.434346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.951 [2024-07-15 22:48:40.434373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.951 qpair failed and we were unable to recover it. 
00:24:56.951 [2024-07-15 22:48:40.434551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.951 [2024-07-15 22:48:40.434587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.951 qpair failed and we were unable to recover it. 00:24:56.951 [2024-07-15 22:48:40.434738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.951 [2024-07-15 22:48:40.434766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.951 qpair failed and we were unable to recover it. 00:24:56.951 [2024-07-15 22:48:40.434914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.951 [2024-07-15 22:48:40.434939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.951 qpair failed and we were unable to recover it. 00:24:56.951 [2024-07-15 22:48:40.435096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.951 [2024-07-15 22:48:40.435123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.951 qpair failed and we were unable to recover it. 00:24:56.951 [2024-07-15 22:48:40.435270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.951 [2024-07-15 22:48:40.435298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.951 qpair failed and we were unable to recover it. 
00:24:56.951 [2024-07-15 22:48:40.435470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.951 [2024-07-15 22:48:40.435498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.951 qpair failed and we were unable to recover it. 00:24:56.951 [2024-07-15 22:48:40.435641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.951 [2024-07-15 22:48:40.435667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.951 qpair failed and we were unable to recover it. 00:24:56.951 [2024-07-15 22:48:40.435814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.951 [2024-07-15 22:48:40.435841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.951 qpair failed and we were unable to recover it. 00:24:56.951 [2024-07-15 22:48:40.436059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.951 [2024-07-15 22:48:40.436086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.951 qpair failed and we were unable to recover it. 00:24:56.951 [2024-07-15 22:48:40.436280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.951 [2024-07-15 22:48:40.436308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.951 qpair failed and we were unable to recover it. 
00:24:56.951 [2024-07-15 22:48:40.436487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.436514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.436737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.436768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.436973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.437000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.437200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.437227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.437412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.437439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 
00:24:56.952 [2024-07-15 22:48:40.437574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.437602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.437777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.437805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.437988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.438015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.438170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.438198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.438406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.438433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 
00:24:56.952 [2024-07-15 22:48:40.438699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.438750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.438930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.438957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.439157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.439192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.439372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.439399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.439611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.439638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 
00:24:56.952 [2024-07-15 22:48:40.439834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.439864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.440084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.440125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.440333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.440379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.440620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.440665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.440844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.440872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 
00:24:56.952 [2024-07-15 22:48:40.441084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.441112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.441292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.441320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.441595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.441645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.441856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.441891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.442069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.442097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 
00:24:56.952 [2024-07-15 22:48:40.442249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.442277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.442477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.442505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.442712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.442739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.442917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.442945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.443144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.443172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 
00:24:56.952 [2024-07-15 22:48:40.443356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.443384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.443568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.443595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.443770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.443798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.443995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.444021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.444197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.444222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 
00:24:56.952 [2024-07-15 22:48:40.444394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.444419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.444595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.444620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.444792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.444817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.444994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.445019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.445229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.445254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 
00:24:56.952 [2024-07-15 22:48:40.445441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.445481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.445696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.445722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.445908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.445933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.446131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.446174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.446376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.446426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 
00:24:56.952 [2024-07-15 22:48:40.446691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.446716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.446862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.446895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.447051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.447076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.447256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.447293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 00:24:56.952 [2024-07-15 22:48:40.447483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.952 [2024-07-15 22:48:40.447508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:56.952 qpair failed and we were unable to recover it. 
00:24:57.220 [2024-07-15 22:48:40.447707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.220 [2024-07-15 22:48:40.447735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.220 qpair failed and we were unable to recover it. 00:24:57.220 [2024-07-15 22:48:40.447913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.220 [2024-07-15 22:48:40.447940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.220 qpair failed and we were unable to recover it. 00:24:57.220 [2024-07-15 22:48:40.448100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.220 [2024-07-15 22:48:40.448127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.220 qpair failed and we were unable to recover it. 00:24:57.220 [2024-07-15 22:48:40.448337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.220 [2024-07-15 22:48:40.448382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.220 qpair failed and we were unable to recover it. 00:24:57.220 [2024-07-15 22:48:40.448578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.220 [2024-07-15 22:48:40.448622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.220 qpair failed and we were unable to recover it. 
00:24:57.220 [2024-07-15 22:48:40.448823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.220 [2024-07-15 22:48:40.448849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.220 qpair failed and we were unable to recover it. 00:24:57.220 [2024-07-15 22:48:40.449006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.220 [2024-07-15 22:48:40.449033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.220 qpair failed and we were unable to recover it. 00:24:57.220 [2024-07-15 22:48:40.449218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.220 [2024-07-15 22:48:40.449245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.220 qpair failed and we were unable to recover it. 00:24:57.220 [2024-07-15 22:48:40.449452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.220 [2024-07-15 22:48:40.449478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.220 qpair failed and we were unable to recover it. 00:24:57.220 [2024-07-15 22:48:40.449656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.220 [2024-07-15 22:48:40.449682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.220 qpair failed and we were unable to recover it. 
00:24:57.220 [2024-07-15 22:48:40.449889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.220 [2024-07-15 22:48:40.449915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.220 qpair failed and we were unable to recover it. 00:24:57.220 [2024-07-15 22:48:40.450061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.220 [2024-07-15 22:48:40.450086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.220 qpair failed and we were unable to recover it. 00:24:57.220 [2024-07-15 22:48:40.450267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.220 [2024-07-15 22:48:40.450294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.220 qpair failed and we were unable to recover it. 00:24:57.220 [2024-07-15 22:48:40.450506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.220 [2024-07-15 22:48:40.450532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.220 qpair failed and we were unable to recover it. 00:24:57.220 [2024-07-15 22:48:40.450694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.220 [2024-07-15 22:48:40.450732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.220 qpair failed and we were unable to recover it. 
00:24:57.220 [2024-07-15 22:48:40.450898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.220 [2024-07-15 22:48:40.450925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.220 qpair failed and we were unable to recover it. 00:24:57.220 [2024-07-15 22:48:40.451081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.220 [2024-07-15 22:48:40.451108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.220 qpair failed and we were unable to recover it. 00:24:57.220 [2024-07-15 22:48:40.451284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.220 [2024-07-15 22:48:40.451310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.220 qpair failed and we were unable to recover it. 00:24:57.220 [2024-07-15 22:48:40.451485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.220 [2024-07-15 22:48:40.451510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.220 qpair failed and we were unable to recover it. 00:24:57.221 [2024-07-15 22:48:40.451690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.451734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 
00:24:57.221 [2024-07-15 22:48:40.451910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.451939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 00:24:57.221 [2024-07-15 22:48:40.452122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.452149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 00:24:57.221 [2024-07-15 22:48:40.452319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.452345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 00:24:57.221 [2024-07-15 22:48:40.452568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.452611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 00:24:57.221 [2024-07-15 22:48:40.452814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.452839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 
00:24:57.221 [2024-07-15 22:48:40.453024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.453050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 00:24:57.221 [2024-07-15 22:48:40.453230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.453258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 00:24:57.221 [2024-07-15 22:48:40.453411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.453438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 00:24:57.221 [2024-07-15 22:48:40.453693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.453721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 00:24:57.221 [2024-07-15 22:48:40.453870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.453902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 
00:24:57.221 [2024-07-15 22:48:40.454081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.454109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 00:24:57.221 [2024-07-15 22:48:40.454289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.454317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 00:24:57.221 [2024-07-15 22:48:40.454492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.454520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 00:24:57.221 [2024-07-15 22:48:40.454727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.454755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 00:24:57.221 [2024-07-15 22:48:40.454959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.454991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 
00:24:57.221 [2024-07-15 22:48:40.455185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.455230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 00:24:57.221 [2024-07-15 22:48:40.455466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.455494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 00:24:57.221 [2024-07-15 22:48:40.455681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.455708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 00:24:57.221 [2024-07-15 22:48:40.455861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.455897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 00:24:57.221 [2024-07-15 22:48:40.456077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.456105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 
00:24:57.221 [2024-07-15 22:48:40.456307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.456334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 00:24:57.221 [2024-07-15 22:48:40.456512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.456539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 00:24:57.221 [2024-07-15 22:48:40.456688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.456716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 00:24:57.221 [2024-07-15 22:48:40.456856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.456889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 00:24:57.221 [2024-07-15 22:48:40.457083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.457125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 
00:24:57.221 [2024-07-15 22:48:40.457358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.457387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 00:24:57.221 [2024-07-15 22:48:40.457566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.457594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 00:24:57.221 [2024-07-15 22:48:40.457790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.457821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 00:24:57.221 [2024-07-15 22:48:40.458006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.458035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 00:24:57.221 [2024-07-15 22:48:40.458236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.458266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 
00:24:57.221 [2024-07-15 22:48:40.458452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.458481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 00:24:57.221 [2024-07-15 22:48:40.458672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.458703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 00:24:57.221 [2024-07-15 22:48:40.458896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.458941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 00:24:57.221 [2024-07-15 22:48:40.459082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.459109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 00:24:57.221 [2024-07-15 22:48:40.459337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.459367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 
00:24:57.221 [2024-07-15 22:48:40.459539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.459569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 00:24:57.221 [2024-07-15 22:48:40.459773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.459802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 00:24:57.221 [2024-07-15 22:48:40.459982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.460009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 00:24:57.221 [2024-07-15 22:48:40.460211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.460239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 00:24:57.221 [2024-07-15 22:48:40.460392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.221 [2024-07-15 22:48:40.460419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.221 qpair failed and we were unable to recover it. 
00:24:57.222 [2024-07-15 22:48:40.460619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.460647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 00:24:57.222 [2024-07-15 22:48:40.460864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.460908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 00:24:57.222 [2024-07-15 22:48:40.461079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.461106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 00:24:57.222 [2024-07-15 22:48:40.461300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.461330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 00:24:57.222 [2024-07-15 22:48:40.461501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.461532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 
00:24:57.222 [2024-07-15 22:48:40.461730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.461758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 00:24:57.222 [2024-07-15 22:48:40.461939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.461967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 00:24:57.222 [2024-07-15 22:48:40.462136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.462162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 00:24:57.222 [2024-07-15 22:48:40.462374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.462402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 00:24:57.222 [2024-07-15 22:48:40.462589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.462618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 
00:24:57.222 [2024-07-15 22:48:40.462838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.462864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 00:24:57.222 [2024-07-15 22:48:40.463027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.463054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 00:24:57.222 [2024-07-15 22:48:40.463214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.463240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 00:24:57.222 [2024-07-15 22:48:40.463414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.463441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 00:24:57.222 [2024-07-15 22:48:40.463652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.463697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 
00:24:57.222 [2024-07-15 22:48:40.463923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.463951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 00:24:57.222 [2024-07-15 22:48:40.464132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.464175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 00:24:57.222 [2024-07-15 22:48:40.464378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.464405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 00:24:57.222 [2024-07-15 22:48:40.464574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.464601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 00:24:57.222 [2024-07-15 22:48:40.464775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.464820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 
00:24:57.222 [2024-07-15 22:48:40.465013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.465039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 00:24:57.222 [2024-07-15 22:48:40.465239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.465269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 00:24:57.222 [2024-07-15 22:48:40.465489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.465519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 00:24:57.222 [2024-07-15 22:48:40.465697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.465724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 00:24:57.222 [2024-07-15 22:48:40.465900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.465928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 
00:24:57.222 [2024-07-15 22:48:40.466143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.466171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 00:24:57.222 [2024-07-15 22:48:40.466330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.466363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 00:24:57.222 [2024-07-15 22:48:40.466546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.466576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 00:24:57.222 [2024-07-15 22:48:40.466813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.466843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 00:24:57.222 [2024-07-15 22:48:40.467080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.467107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 
00:24:57.222 [2024-07-15 22:48:40.467292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.467322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 00:24:57.222 [2024-07-15 22:48:40.467538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.467565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 00:24:57.222 [2024-07-15 22:48:40.467753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.467784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 00:24:57.222 [2024-07-15 22:48:40.467961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.467989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 00:24:57.222 [2024-07-15 22:48:40.468144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.468189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 
00:24:57.222 [2024-07-15 22:48:40.468394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.468421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 00:24:57.222 [2024-07-15 22:48:40.468620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.468649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 00:24:57.222 [2024-07-15 22:48:40.468844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.468875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 00:24:57.222 [2024-07-15 22:48:40.469094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.469121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 00:24:57.222 [2024-07-15 22:48:40.469323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.469352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 
00:24:57.222 [2024-07-15 22:48:40.469550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.222 [2024-07-15 22:48:40.469578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.222 qpair failed and we were unable to recover it. 00:24:57.223 [2024-07-15 22:48:40.469722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.223 [2024-07-15 22:48:40.469749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.223 qpair failed and we were unable to recover it. 00:24:57.223 [2024-07-15 22:48:40.469894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.223 [2024-07-15 22:48:40.469925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.223 qpair failed and we were unable to recover it. 00:24:57.223 [2024-07-15 22:48:40.470073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.223 [2024-07-15 22:48:40.470100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.223 qpair failed and we were unable to recover it. 00:24:57.223 [2024-07-15 22:48:40.470278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.223 [2024-07-15 22:48:40.470306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.223 qpair failed and we were unable to recover it. 
00:24:57.223 [2024-07-15 22:48:40.470479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.223 [2024-07-15 22:48:40.470508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.223 qpair failed and we were unable to recover it. 00:24:57.223 [2024-07-15 22:48:40.470664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.223 [2024-07-15 22:48:40.470694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.223 qpair failed and we were unable to recover it. 00:24:57.223 [2024-07-15 22:48:40.470891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.223 [2024-07-15 22:48:40.470919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.223 qpair failed and we were unable to recover it. 00:24:57.223 [2024-07-15 22:48:40.471066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.223 [2024-07-15 22:48:40.471093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.223 qpair failed and we were unable to recover it. 00:24:57.223 [2024-07-15 22:48:40.471302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.223 [2024-07-15 22:48:40.471332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.223 qpair failed and we were unable to recover it. 
00:24:57.223 [2024-07-15 22:48:40.471552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.223 [2024-07-15 22:48:40.471579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.223 qpair failed and we were unable to recover it.
00:24:57.223 [2024-07-15 22:48:40.471806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.223 [2024-07-15 22:48:40.471836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.223 qpair failed and we were unable to recover it.
00:24:57.223 [2024-07-15 22:48:40.472018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.223 [2024-07-15 22:48:40.472046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.223 qpair failed and we were unable to recover it.
00:24:57.223 [2024-07-15 22:48:40.472211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.223 [2024-07-15 22:48:40.472239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.223 qpair failed and we were unable to recover it.
00:24:57.223 [2024-07-15 22:48:40.472395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.223 [2024-07-15 22:48:40.472425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.223 qpair failed and we were unable to recover it.
00:24:57.223 [2024-07-15 22:48:40.472593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.223 [2024-07-15 22:48:40.472622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.223 qpair failed and we were unable to recover it.
00:24:57.223 [2024-07-15 22:48:40.472844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.223 [2024-07-15 22:48:40.472887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.223 qpair failed and we were unable to recover it.
00:24:57.223 [2024-07-15 22:48:40.473112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.223 [2024-07-15 22:48:40.473139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.223 qpair failed and we were unable to recover it.
00:24:57.223 [2024-07-15 22:48:40.473311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.223 [2024-07-15 22:48:40.473341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.223 qpair failed and we were unable to recover it.
00:24:57.223 [2024-07-15 22:48:40.473598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.223 [2024-07-15 22:48:40.473627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.223 qpair failed and we were unable to recover it.
00:24:57.223 [2024-07-15 22:48:40.473844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.223 [2024-07-15 22:48:40.473874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.223 qpair failed and we were unable to recover it.
00:24:57.223 [2024-07-15 22:48:40.474083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.223 [2024-07-15 22:48:40.474110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.223 qpair failed and we were unable to recover it.
00:24:57.223 [2024-07-15 22:48:40.474288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.223 [2024-07-15 22:48:40.474315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.223 qpair failed and we were unable to recover it.
00:24:57.223 [2024-07-15 22:48:40.474487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.223 [2024-07-15 22:48:40.474514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.223 qpair failed and we were unable to recover it.
00:24:57.223 [2024-07-15 22:48:40.474727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.223 [2024-07-15 22:48:40.474757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.223 qpair failed and we were unable to recover it.
00:24:57.223 [2024-07-15 22:48:40.474965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.223 [2024-07-15 22:48:40.474992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.223 qpair failed and we were unable to recover it.
00:24:57.223 [2024-07-15 22:48:40.475146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.223 [2024-07-15 22:48:40.475180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.223 qpair failed and we were unable to recover it.
00:24:57.223 [2024-07-15 22:48:40.475359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.223 [2024-07-15 22:48:40.475387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.223 qpair failed and we were unable to recover it.
00:24:57.223 [2024-07-15 22:48:40.475659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.223 [2024-07-15 22:48:40.475689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.223 qpair failed and we were unable to recover it.
00:24:57.223 [2024-07-15 22:48:40.475883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.223 [2024-07-15 22:48:40.475929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.223 qpair failed and we were unable to recover it.
00:24:57.223 [2024-07-15 22:48:40.476109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.223 [2024-07-15 22:48:40.476136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.223 qpair failed and we were unable to recover it.
00:24:57.223 [2024-07-15 22:48:40.476290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.223 [2024-07-15 22:48:40.476318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.223 qpair failed and we were unable to recover it.
00:24:57.223 [2024-07-15 22:48:40.476518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.223 [2024-07-15 22:48:40.476549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.223 qpair failed and we were unable to recover it.
00:24:57.223 [2024-07-15 22:48:40.476771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.223 [2024-07-15 22:48:40.476798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.223 qpair failed and we were unable to recover it.
00:24:57.223 [2024-07-15 22:48:40.476969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.223 [2024-07-15 22:48:40.476996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.223 qpair failed and we were unable to recover it.
00:24:57.223 [2024-07-15 22:48:40.477187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.223 [2024-07-15 22:48:40.477217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.223 qpair failed and we were unable to recover it.
00:24:57.223 [2024-07-15 22:48:40.477406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.223 [2024-07-15 22:48:40.477433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.223 qpair failed and we were unable to recover it.
00:24:57.223 [2024-07-15 22:48:40.477575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.223 [2024-07-15 22:48:40.477607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.223 qpair failed and we were unable to recover it.
00:24:57.223 [2024-07-15 22:48:40.477841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.223 [2024-07-15 22:48:40.477871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.223 qpair failed and we were unable to recover it.
00:24:57.223 [2024-07-15 22:48:40.478048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.223 [2024-07-15 22:48:40.478075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.223 qpair failed and we were unable to recover it.
00:24:57.223 [2024-07-15 22:48:40.478255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.223 [2024-07-15 22:48:40.478283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.223 qpair failed and we were unable to recover it.
00:24:57.223 [2024-07-15 22:48:40.478489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.223 [2024-07-15 22:48:40.478516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.223 qpair failed and we were unable to recover it.
00:24:57.223 [2024-07-15 22:48:40.478656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.224 [2024-07-15 22:48:40.478700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.224 qpair failed and we were unable to recover it.
00:24:57.224 [2024-07-15 22:48:40.478886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.224 [2024-07-15 22:48:40.478914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.224 qpair failed and we were unable to recover it.
00:24:57.224 [2024-07-15 22:48:40.479088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.224 [2024-07-15 22:48:40.479115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.224 qpair failed and we were unable to recover it.
00:24:57.224 [2024-07-15 22:48:40.479353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.224 [2024-07-15 22:48:40.479380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.224 qpair failed and we were unable to recover it.
00:24:57.224 [2024-07-15 22:48:40.479524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.224 [2024-07-15 22:48:40.479551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.224 qpair failed and we were unable to recover it.
00:24:57.224 [2024-07-15 22:48:40.479840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.224 [2024-07-15 22:48:40.479900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.224 qpair failed and we were unable to recover it.
00:24:57.224 [2024-07-15 22:48:40.480120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.224 [2024-07-15 22:48:40.480163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.224 qpair failed and we were unable to recover it.
00:24:57.224 [2024-07-15 22:48:40.480364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.224 [2024-07-15 22:48:40.480392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.224 qpair failed and we were unable to recover it.
00:24:57.224 [2024-07-15 22:48:40.480568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.224 [2024-07-15 22:48:40.480595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.224 qpair failed and we were unable to recover it.
00:24:57.224 [2024-07-15 22:48:40.480772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.224 [2024-07-15 22:48:40.480799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.224 qpair failed and we were unable to recover it.
00:24:57.224 [2024-07-15 22:48:40.480975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.224 [2024-07-15 22:48:40.481004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.224 qpair failed and we were unable to recover it.
00:24:57.224 [2024-07-15 22:48:40.481178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.224 [2024-07-15 22:48:40.481205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.224 qpair failed and we were unable to recover it.
00:24:57.224 [2024-07-15 22:48:40.481386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.224 [2024-07-15 22:48:40.481413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.224 qpair failed and we were unable to recover it.
00:24:57.224 [2024-07-15 22:48:40.481609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.224 [2024-07-15 22:48:40.481639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.224 qpair failed and we were unable to recover it.
00:24:57.224 [2024-07-15 22:48:40.481839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.224 [2024-07-15 22:48:40.481866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.224 qpair failed and we were unable to recover it.
00:24:57.224 [2024-07-15 22:48:40.482053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.224 [2024-07-15 22:48:40.482080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.224 qpair failed and we were unable to recover it.
00:24:57.224 [2024-07-15 22:48:40.482249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.224 [2024-07-15 22:48:40.482277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.224 qpair failed and we were unable to recover it.
00:24:57.224 [2024-07-15 22:48:40.482478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.224 [2024-07-15 22:48:40.482505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.224 qpair failed and we were unable to recover it.
00:24:57.224 [2024-07-15 22:48:40.482701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.224 [2024-07-15 22:48:40.482731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.224 qpair failed and we were unable to recover it.
00:24:57.224 [2024-07-15 22:48:40.482926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.224 [2024-07-15 22:48:40.482954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.224 qpair failed and we were unable to recover it.
00:24:57.224 [2024-07-15 22:48:40.483131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.224 [2024-07-15 22:48:40.483162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.224 qpair failed and we were unable to recover it.
00:24:57.224 [2024-07-15 22:48:40.483359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.224 [2024-07-15 22:48:40.483389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.224 qpair failed and we were unable to recover it.
00:24:57.224 [2024-07-15 22:48:40.483580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.224 [2024-07-15 22:48:40.483607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.224 qpair failed and we were unable to recover it.
00:24:57.224 [2024-07-15 22:48:40.483826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.224 [2024-07-15 22:48:40.483856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.224 qpair failed and we were unable to recover it.
00:24:57.224 [2024-07-15 22:48:40.484062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.224 [2024-07-15 22:48:40.484093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.224 qpair failed and we were unable to recover it.
00:24:57.224 [2024-07-15 22:48:40.484285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.224 [2024-07-15 22:48:40.484313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.224 qpair failed and we were unable to recover it.
00:24:57.224 [2024-07-15 22:48:40.484516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.224 [2024-07-15 22:48:40.484546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.224 qpair failed and we were unable to recover it.
00:24:57.224 [2024-07-15 22:48:40.484743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.224 [2024-07-15 22:48:40.484773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.224 qpair failed and we were unable to recover it.
00:24:57.224 [2024-07-15 22:48:40.484996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.224 [2024-07-15 22:48:40.485036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.224 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.485271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.225 [2024-07-15 22:48:40.485301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.225 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.485520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.225 [2024-07-15 22:48:40.485550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.225 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.485776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.225 [2024-07-15 22:48:40.485803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.225 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.485982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.225 [2024-07-15 22:48:40.486011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.225 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.486237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.225 [2024-07-15 22:48:40.486264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.225 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.486446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.225 [2024-07-15 22:48:40.486475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.225 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.486707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.225 [2024-07-15 22:48:40.486737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.225 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.486943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.225 [2024-07-15 22:48:40.486974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.225 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.487177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.225 [2024-07-15 22:48:40.487205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.225 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.487410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.225 [2024-07-15 22:48:40.487437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.225 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.487636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.225 [2024-07-15 22:48:40.487666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.225 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.487900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.225 [2024-07-15 22:48:40.487928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.225 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.488134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.225 [2024-07-15 22:48:40.488178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.225 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.488353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.225 [2024-07-15 22:48:40.488383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.225 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.488582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.225 [2024-07-15 22:48:40.488609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.225 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.488755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.225 [2024-07-15 22:48:40.488783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.225 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.488944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.225 [2024-07-15 22:48:40.488975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.225 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.489171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.225 [2024-07-15 22:48:40.489199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.225 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.489398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.225 [2024-07-15 22:48:40.489428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.225 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.489619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.225 [2024-07-15 22:48:40.489650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.225 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.489917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.225 [2024-07-15 22:48:40.489945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.225 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.490117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.225 [2024-07-15 22:48:40.490160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.225 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.490351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.225 [2024-07-15 22:48:40.490382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.225 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.490575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.225 [2024-07-15 22:48:40.490603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.225 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.490800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.225 [2024-07-15 22:48:40.490830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.225 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.491062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.225 [2024-07-15 22:48:40.491090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.225 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.491270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.225 [2024-07-15 22:48:40.491297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.225 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.491502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.225 [2024-07-15 22:48:40.491546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.225 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.491744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.225 [2024-07-15 22:48:40.491772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.225 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.491917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.225 [2024-07-15 22:48:40.491944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.225 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.492124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.225 [2024-07-15 22:48:40.492167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.225 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.492358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.225 [2024-07-15 22:48:40.492388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.225 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.492588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.225 [2024-07-15 22:48:40.492615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.225 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.492787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.225 [2024-07-15 22:48:40.492815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.225 qpair failed and we were unable to recover it.
00:24:57.225 [2024-07-15 22:48:40.492982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.225 [2024-07-15 22:48:40.493010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.225 qpair failed and we were unable to recover it. 00:24:57.225 [2024-07-15 22:48:40.493167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.225 [2024-07-15 22:48:40.493196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.225 qpair failed and we were unable to recover it. 00:24:57.225 [2024-07-15 22:48:40.493396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.225 [2024-07-15 22:48:40.493425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.225 qpair failed and we were unable to recover it. 00:24:57.226 [2024-07-15 22:48:40.493655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.493682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 00:24:57.226 [2024-07-15 22:48:40.493847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.493885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 
00:24:57.226 [2024-07-15 22:48:40.494109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.494136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 00:24:57.226 [2024-07-15 22:48:40.494332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.494366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 00:24:57.226 [2024-07-15 22:48:40.494595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.494625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 00:24:57.226 [2024-07-15 22:48:40.494832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.494859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 00:24:57.226 [2024-07-15 22:48:40.495023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.495050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 
00:24:57.226 [2024-07-15 22:48:40.495227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.495254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 00:24:57.226 [2024-07-15 22:48:40.495444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.495472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 00:24:57.226 [2024-07-15 22:48:40.495642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.495669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 00:24:57.226 [2024-07-15 22:48:40.495868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.495903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 00:24:57.226 [2024-07-15 22:48:40.496104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.496135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 
00:24:57.226 [2024-07-15 22:48:40.496355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.496385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 00:24:57.226 [2024-07-15 22:48:40.496559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.496586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 00:24:57.226 [2024-07-15 22:48:40.496786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.496813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 00:24:57.226 [2024-07-15 22:48:40.497031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.497061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 00:24:57.226 [2024-07-15 22:48:40.497236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.497263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 
00:24:57.226 [2024-07-15 22:48:40.497438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.497468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 00:24:57.226 [2024-07-15 22:48:40.497658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.497685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 00:24:57.226 [2024-07-15 22:48:40.497891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.497918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 00:24:57.226 [2024-07-15 22:48:40.498138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.498168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 00:24:57.226 [2024-07-15 22:48:40.498364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.498393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 
00:24:57.226 [2024-07-15 22:48:40.498613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.498640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 00:24:57.226 [2024-07-15 22:48:40.498853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.498891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 00:24:57.226 [2024-07-15 22:48:40.499087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.499117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 00:24:57.226 [2024-07-15 22:48:40.499285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.499313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 00:24:57.226 [2024-07-15 22:48:40.499506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.499536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 
00:24:57.226 [2024-07-15 22:48:40.499755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.499785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 00:24:57.226 [2024-07-15 22:48:40.499961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.499989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 00:24:57.226 [2024-07-15 22:48:40.500183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.500213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 00:24:57.226 [2024-07-15 22:48:40.500429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.500464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 00:24:57.226 [2024-07-15 22:48:40.500656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.500683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 
00:24:57.226 [2024-07-15 22:48:40.500911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.500941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 00:24:57.226 [2024-07-15 22:48:40.501133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.501162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 00:24:57.226 [2024-07-15 22:48:40.501368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.501395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 00:24:57.226 [2024-07-15 22:48:40.501570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.501599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 00:24:57.226 [2024-07-15 22:48:40.501780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.501810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 
00:24:57.226 [2024-07-15 22:48:40.502013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.502040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 00:24:57.226 [2024-07-15 22:48:40.502240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.502271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 00:24:57.226 [2024-07-15 22:48:40.502461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.502491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 00:24:57.226 [2024-07-15 22:48:40.502690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.226 [2024-07-15 22:48:40.502717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.226 qpair failed and we were unable to recover it. 00:24:57.226 [2024-07-15 22:48:40.502925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.502953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 
00:24:57.227 [2024-07-15 22:48:40.503105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.503132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 00:24:57.227 [2024-07-15 22:48:40.503313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.503339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 00:24:57.227 [2024-07-15 22:48:40.503515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.503546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 00:24:57.227 [2024-07-15 22:48:40.503697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.503727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 00:24:57.227 [2024-07-15 22:48:40.503923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.503951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 
00:24:57.227 [2024-07-15 22:48:40.504105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.504133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 00:24:57.227 [2024-07-15 22:48:40.504312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.504339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 00:24:57.227 [2024-07-15 22:48:40.504515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.504542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 00:24:57.227 [2024-07-15 22:48:40.504733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.504763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 00:24:57.227 [2024-07-15 22:48:40.504981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.505012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 
00:24:57.227 [2024-07-15 22:48:40.505212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.505239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 00:24:57.227 [2024-07-15 22:48:40.505435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.505465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 00:24:57.227 [2024-07-15 22:48:40.505669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.505696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 00:24:57.227 [2024-07-15 22:48:40.505866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.505902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 00:24:57.227 [2024-07-15 22:48:40.506127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.506157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 
00:24:57.227 [2024-07-15 22:48:40.506380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.506410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 00:24:57.227 [2024-07-15 22:48:40.506617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.506644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 00:24:57.227 [2024-07-15 22:48:40.506822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.506849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 00:24:57.227 [2024-07-15 22:48:40.507056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.507086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 00:24:57.227 [2024-07-15 22:48:40.507284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.507314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 
00:24:57.227 [2024-07-15 22:48:40.507512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.507540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 00:24:57.227 [2024-07-15 22:48:40.507717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.507746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 00:24:57.227 [2024-07-15 22:48:40.507936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.507967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 00:24:57.227 [2024-07-15 22:48:40.508134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.508164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 00:24:57.227 [2024-07-15 22:48:40.508384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.508411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 
00:24:57.227 [2024-07-15 22:48:40.508575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.508605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 00:24:57.227 [2024-07-15 22:48:40.508799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.508828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 00:24:57.227 [2024-07-15 22:48:40.509034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.509065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 00:24:57.227 [2024-07-15 22:48:40.509285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.509313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 00:24:57.227 [2024-07-15 22:48:40.509508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.509543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 
00:24:57.227 [2024-07-15 22:48:40.509739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.509768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 00:24:57.227 [2024-07-15 22:48:40.509989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.510020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 00:24:57.227 [2024-07-15 22:48:40.510220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.510247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 00:24:57.227 [2024-07-15 22:48:40.510408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.510439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 00:24:57.227 [2024-07-15 22:48:40.510652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.510682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 
00:24:57.227 [2024-07-15 22:48:40.510882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.510910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 00:24:57.227 [2024-07-15 22:48:40.511082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.511109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 00:24:57.227 [2024-07-15 22:48:40.511264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.511291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 00:24:57.227 [2024-07-15 22:48:40.511493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.511520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 00:24:57.227 [2024-07-15 22:48:40.511729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.227 [2024-07-15 22:48:40.511756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.227 qpair failed and we were unable to recover it. 
00:24:57.227 [2024-07-15 22:48:40.511935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.227 [2024-07-15 22:48:40.511964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.228 qpair failed and we were unable to recover it.
00:24:57.228 [... the same three-line sequence (posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats continuously from 22:48:40.512131 through 22:48:40.539350 ...]
00:24:57.232 [2024-07-15 22:48:40.539639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.232 [2024-07-15 22:48:40.539670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.232 qpair failed and we were unable to recover it. 00:24:57.232 [2024-07-15 22:48:40.539902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.232 [2024-07-15 22:48:40.539940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.232 qpair failed and we were unable to recover it. 00:24:57.232 [2024-07-15 22:48:40.540131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.232 [2024-07-15 22:48:40.540163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.232 qpair failed and we were unable to recover it. 00:24:57.232 [2024-07-15 22:48:40.540356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.232 [2024-07-15 22:48:40.540386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.232 qpair failed and we were unable to recover it. 00:24:57.232 [2024-07-15 22:48:40.540568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.232 [2024-07-15 22:48:40.540598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.232 qpair failed and we were unable to recover it. 
00:24:57.232 [2024-07-15 22:48:40.540788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.232 [2024-07-15 22:48:40.540818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.232 qpair failed and we were unable to recover it. 00:24:57.232 [2024-07-15 22:48:40.541054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.232 [2024-07-15 22:48:40.541081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.232 qpair failed and we were unable to recover it. 00:24:57.232 [2024-07-15 22:48:40.541285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.232 [2024-07-15 22:48:40.541315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.232 qpair failed and we were unable to recover it. 00:24:57.232 [2024-07-15 22:48:40.541633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.232 [2024-07-15 22:48:40.541698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.232 qpair failed and we were unable to recover it. 00:24:57.232 [2024-07-15 22:48:40.541860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.232 [2024-07-15 22:48:40.541897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.232 qpair failed and we were unable to recover it. 
00:24:57.232 [2024-07-15 22:48:40.542099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.232 [2024-07-15 22:48:40.542129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.232 qpair failed and we were unable to recover it. 00:24:57.232 [2024-07-15 22:48:40.542320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.232 [2024-07-15 22:48:40.542350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.232 qpair failed and we were unable to recover it. 00:24:57.232 [2024-07-15 22:48:40.542687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.232 [2024-07-15 22:48:40.542743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.232 qpair failed and we were unable to recover it. 00:24:57.232 [2024-07-15 22:48:40.542976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.232 [2024-07-15 22:48:40.543002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.232 qpair failed and we were unable to recover it. 00:24:57.232 [2024-07-15 22:48:40.543165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.232 [2024-07-15 22:48:40.543195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.232 qpair failed and we were unable to recover it. 
00:24:57.232 [2024-07-15 22:48:40.543368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.232 [2024-07-15 22:48:40.543397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.232 qpair failed and we were unable to recover it. 00:24:57.232 [2024-07-15 22:48:40.543618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.232 [2024-07-15 22:48:40.543645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.232 qpair failed and we were unable to recover it. 00:24:57.232 [2024-07-15 22:48:40.543821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.232 [2024-07-15 22:48:40.543848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.232 qpair failed and we were unable to recover it. 00:24:57.232 [2024-07-15 22:48:40.544060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.232 [2024-07-15 22:48:40.544087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.232 qpair failed and we were unable to recover it. 00:24:57.232 [2024-07-15 22:48:40.544285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.232 [2024-07-15 22:48:40.544315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.232 qpair failed and we were unable to recover it. 
00:24:57.232 [2024-07-15 22:48:40.544640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.232 [2024-07-15 22:48:40.544697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.232 qpair failed and we were unable to recover it. 00:24:57.232 [2024-07-15 22:48:40.544889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.232 [2024-07-15 22:48:40.544917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.232 qpair failed and we were unable to recover it. 00:24:57.232 [2024-07-15 22:48:40.545139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.232 [2024-07-15 22:48:40.545178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.232 qpair failed and we were unable to recover it. 00:24:57.232 [2024-07-15 22:48:40.545346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.232 [2024-07-15 22:48:40.545376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.232 qpair failed and we were unable to recover it. 00:24:57.232 [2024-07-15 22:48:40.545677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.232 [2024-07-15 22:48:40.545740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.232 qpair failed and we were unable to recover it. 
00:24:57.232 [2024-07-15 22:48:40.545974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.232 [2024-07-15 22:48:40.546000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.232 qpair failed and we were unable to recover it. 00:24:57.232 [2024-07-15 22:48:40.546232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.232 [2024-07-15 22:48:40.546262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.232 qpair failed and we were unable to recover it. 00:24:57.232 [2024-07-15 22:48:40.546456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.232 [2024-07-15 22:48:40.546485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.232 qpair failed and we were unable to recover it. 00:24:57.232 [2024-07-15 22:48:40.546730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.232 [2024-07-15 22:48:40.546778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.232 qpair failed and we were unable to recover it. 00:24:57.232 [2024-07-15 22:48:40.546974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.232 [2024-07-15 22:48:40.547001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.232 qpair failed and we were unable to recover it. 
00:24:57.232 [2024-07-15 22:48:40.547203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.232 [2024-07-15 22:48:40.547234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.232 qpair failed and we were unable to recover it. 00:24:57.233 [2024-07-15 22:48:40.547453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.547483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 00:24:57.233 [2024-07-15 22:48:40.547781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.547832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 00:24:57.233 [2024-07-15 22:48:40.548052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.548079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 00:24:57.233 [2024-07-15 22:48:40.548287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.548317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 
00:24:57.233 [2024-07-15 22:48:40.548545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.548572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 00:24:57.233 [2024-07-15 22:48:40.548769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.548798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 00:24:57.233 [2024-07-15 22:48:40.548990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.549018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 00:24:57.233 [2024-07-15 22:48:40.549218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.549248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 00:24:57.233 [2024-07-15 22:48:40.549449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.549477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 
00:24:57.233 [2024-07-15 22:48:40.549699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.549729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 00:24:57.233 [2024-07-15 22:48:40.549964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.549991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 00:24:57.233 [2024-07-15 22:48:40.550192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.550221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 00:24:57.233 [2024-07-15 22:48:40.550439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.550469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 00:24:57.233 [2024-07-15 22:48:40.550796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.550850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 
00:24:57.233 [2024-07-15 22:48:40.551065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.551092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 00:24:57.233 [2024-07-15 22:48:40.551319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.551348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 00:24:57.233 [2024-07-15 22:48:40.551514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.551544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 00:24:57.233 [2024-07-15 22:48:40.551764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.551794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 00:24:57.233 [2024-07-15 22:48:40.551992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.552019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 
00:24:57.233 [2024-07-15 22:48:40.552242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.552271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 00:24:57.233 [2024-07-15 22:48:40.552471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.552502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 00:24:57.233 [2024-07-15 22:48:40.552904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.552965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 00:24:57.233 [2024-07-15 22:48:40.553195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.553222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 00:24:57.233 [2024-07-15 22:48:40.553448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.553478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 
00:24:57.233 [2024-07-15 22:48:40.553673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.553704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 00:24:57.233 [2024-07-15 22:48:40.553893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.553924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 00:24:57.233 [2024-07-15 22:48:40.554101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.554128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 00:24:57.233 [2024-07-15 22:48:40.554335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.554378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 00:24:57.233 [2024-07-15 22:48:40.554582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.554609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 
00:24:57.233 [2024-07-15 22:48:40.554811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.554838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 00:24:57.233 [2024-07-15 22:48:40.555079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.555106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 00:24:57.233 [2024-07-15 22:48:40.555281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.555308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 00:24:57.233 [2024-07-15 22:48:40.555533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.555564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 00:24:57.233 [2024-07-15 22:48:40.555756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.555786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 
00:24:57.233 [2024-07-15 22:48:40.556017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.556044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 00:24:57.233 [2024-07-15 22:48:40.556256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.556285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 00:24:57.233 [2024-07-15 22:48:40.556453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.556483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 00:24:57.233 [2024-07-15 22:48:40.556832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.556903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 00:24:57.233 [2024-07-15 22:48:40.557119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.557145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 
00:24:57.233 [2024-07-15 22:48:40.557345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.557375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 00:24:57.233 [2024-07-15 22:48:40.557566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.557596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.233 qpair failed and we were unable to recover it. 00:24:57.233 [2024-07-15 22:48:40.557797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.233 [2024-07-15 22:48:40.557827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.234 qpair failed and we were unable to recover it. 00:24:57.234 [2024-07-15 22:48:40.558035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.234 [2024-07-15 22:48:40.558062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.234 qpair failed and we were unable to recover it. 00:24:57.234 [2024-07-15 22:48:40.558225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.234 [2024-07-15 22:48:40.558252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.234 qpair failed and we were unable to recover it. 
00:24:57.234 [2024-07-15 22:48:40.558413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.234 [2024-07-15 22:48:40.558443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.234 qpair failed and we were unable to recover it.
00:24:57.234 [... identical error pair repeats continuously from 22:48:40.558 through 22:48:40.586: posix.c:1038:posix_sock_create connect() failed with errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0xd3f200 (addr=10.0.0.2, port=4420), each attempt ending "qpair failed and we were unable to recover it." ...]
00:24:57.237 [2024-07-15 22:48:40.586322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.586352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-15 22:48:40.586506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.586535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-15 22:48:40.586752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.586781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-15 22:48:40.586949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.586978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-15 22:48:40.587170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.587200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 
00:24:57.237 [2024-07-15 22:48:40.587393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.587422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-15 22:48:40.587688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.587741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-15 22:48:40.587940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.587967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-15 22:48:40.588188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.588217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-15 22:48:40.588384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.588415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 
00:24:57.237 [2024-07-15 22:48:40.588643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.588697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-15 22:48:40.588883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.588911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-15 22:48:40.589066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.589092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-15 22:48:40.589276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.589303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-15 22:48:40.589534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.589563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 
00:24:57.237 [2024-07-15 22:48:40.589766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.589792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-15 22:48:40.589935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.589962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-15 22:48:40.590145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.590171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-15 22:48:40.590347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.590378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-15 22:48:40.590577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.590603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 
00:24:57.237 [2024-07-15 22:48:40.590800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.590841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-15 22:48:40.591019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.591049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-15 22:48:40.591216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.591246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-15 22:48:40.591447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.591474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-15 22:48:40.591666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.591699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 
00:24:57.237 [2024-07-15 22:48:40.591897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.591928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-15 22:48:40.592124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.592153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-15 22:48:40.592378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.592405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-15 22:48:40.592648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.592674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-15 22:48:40.592851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.592884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 
00:24:57.237 [2024-07-15 22:48:40.593059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.593085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-15 22:48:40.593243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.593269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-15 22:48:40.593468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.593495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-15 22:48:40.593729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.593758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-15 22:48:40.593967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.593994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 
00:24:57.237 [2024-07-15 22:48:40.594142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.594170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-15 22:48:40.594400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.594429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-15 22:48:40.594650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.594681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-15 22:48:40.594900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.594930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-15 22:48:40.595128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.595155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 
00:24:57.237 [2024-07-15 22:48:40.595329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.237 [2024-07-15 22:48:40.595359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.237 qpair failed and we were unable to recover it. 00:24:57.237 [2024-07-15 22:48:40.595577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.595604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 00:24:57.238 [2024-07-15 22:48:40.595801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.595831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 00:24:57.238 [2024-07-15 22:48:40.596031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.596058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 00:24:57.238 [2024-07-15 22:48:40.596280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.596309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 
00:24:57.238 [2024-07-15 22:48:40.596533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.596562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 00:24:57.238 [2024-07-15 22:48:40.596759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.596788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 00:24:57.238 [2024-07-15 22:48:40.596965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.596993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 00:24:57.238 [2024-07-15 22:48:40.597135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.597162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 00:24:57.238 [2024-07-15 22:48:40.597337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.597363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 
00:24:57.238 [2024-07-15 22:48:40.597667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.597735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 00:24:57.238 [2024-07-15 22:48:40.597960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.597987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 00:24:57.238 [2024-07-15 22:48:40.598157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.598188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 00:24:57.238 [2024-07-15 22:48:40.598354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.598384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 00:24:57.238 [2024-07-15 22:48:40.598552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.598581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 
00:24:57.238 [2024-07-15 22:48:40.598753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.598779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 00:24:57.238 [2024-07-15 22:48:40.598956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.598983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 00:24:57.238 [2024-07-15 22:48:40.599160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.599186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 00:24:57.238 [2024-07-15 22:48:40.599477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.599527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 00:24:57.238 [2024-07-15 22:48:40.599719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.599745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 
00:24:57.238 [2024-07-15 22:48:40.599903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.599930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 00:24:57.238 [2024-07-15 22:48:40.600155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.600185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 00:24:57.238 [2024-07-15 22:48:40.600575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.600637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 00:24:57.238 [2024-07-15 22:48:40.600820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.600846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 00:24:57.238 [2024-07-15 22:48:40.601033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.601060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 
00:24:57.238 [2024-07-15 22:48:40.601236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.601269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 00:24:57.238 [2024-07-15 22:48:40.601491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.601518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 00:24:57.238 [2024-07-15 22:48:40.601692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.601719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 00:24:57.238 [2024-07-15 22:48:40.601915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.601945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 00:24:57.238 [2024-07-15 22:48:40.602163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.602192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 
00:24:57.238 [2024-07-15 22:48:40.602446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.602496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 00:24:57.238 [2024-07-15 22:48:40.602717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.602744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 00:24:57.238 [2024-07-15 22:48:40.602942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.602972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 00:24:57.238 [2024-07-15 22:48:40.603140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.603169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 00:24:57.238 [2024-07-15 22:48:40.603443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.603472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 
00:24:57.238 [2024-07-15 22:48:40.603701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.603728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 00:24:57.238 [2024-07-15 22:48:40.603927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.603957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 00:24:57.238 [2024-07-15 22:48:40.604151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.604182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 00:24:57.238 [2024-07-15 22:48:40.604501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.604559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 00:24:57.238 [2024-07-15 22:48:40.604794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.238 [2024-07-15 22:48:40.604834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.238 qpair failed and we were unable to recover it. 
00:24:57.241 [2024-07-15 22:48:40.631711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.241 [2024-07-15 22:48:40.631766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.241 qpair failed and we were unable to recover it. 00:24:57.241 [2024-07-15 22:48:40.631988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.241 [2024-07-15 22:48:40.632015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.241 qpair failed and we were unable to recover it. 00:24:57.241 [2024-07-15 22:48:40.632218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.241 [2024-07-15 22:48:40.632248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.241 qpair failed and we were unable to recover it. 00:24:57.241 [2024-07-15 22:48:40.632470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.241 [2024-07-15 22:48:40.632500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.241 qpair failed and we were unable to recover it. 00:24:57.241 [2024-07-15 22:48:40.632723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.241 [2024-07-15 22:48:40.632753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.241 qpair failed and we were unable to recover it. 
00:24:57.241 [2024-07-15 22:48:40.632984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.241 [2024-07-15 22:48:40.633011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.241 qpair failed and we were unable to recover it. 00:24:57.241 [2024-07-15 22:48:40.633236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.241 [2024-07-15 22:48:40.633266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.241 qpair failed and we were unable to recover it. 00:24:57.241 [2024-07-15 22:48:40.633487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.241 [2024-07-15 22:48:40.633517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.241 qpair failed and we were unable to recover it. 00:24:57.241 [2024-07-15 22:48:40.633689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.241 [2024-07-15 22:48:40.633719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.241 qpair failed and we were unable to recover it. 00:24:57.241 [2024-07-15 22:48:40.633925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.241 [2024-07-15 22:48:40.633952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.241 qpair failed and we were unable to recover it. 
00:24:57.241 [2024-07-15 22:48:40.634154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.241 [2024-07-15 22:48:40.634184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.241 qpair failed and we were unable to recover it. 00:24:57.241 [2024-07-15 22:48:40.634374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.634408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 00:24:57.242 [2024-07-15 22:48:40.634748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.634813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 00:24:57.242 [2024-07-15 22:48:40.635047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.635074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 00:24:57.242 [2024-07-15 22:48:40.635256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.635286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 
00:24:57.242 [2024-07-15 22:48:40.635445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.635475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 00:24:57.242 [2024-07-15 22:48:40.635824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.635889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 00:24:57.242 [2024-07-15 22:48:40.636097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.636124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 00:24:57.242 [2024-07-15 22:48:40.636325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.636363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 00:24:57.242 [2024-07-15 22:48:40.636594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.636623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 
00:24:57.242 [2024-07-15 22:48:40.636849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.636883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 00:24:57.242 [2024-07-15 22:48:40.637063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.637089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 00:24:57.242 [2024-07-15 22:48:40.637301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.637331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 00:24:57.242 [2024-07-15 22:48:40.637537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.637565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 00:24:57.242 [2024-07-15 22:48:40.637732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.637759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 
00:24:57.242 [2024-07-15 22:48:40.637970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.637998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 00:24:57.242 [2024-07-15 22:48:40.638141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.638186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 00:24:57.242 [2024-07-15 22:48:40.638343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.638373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 00:24:57.242 [2024-07-15 22:48:40.638557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.638586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 00:24:57.242 [2024-07-15 22:48:40.638759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.638786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 
00:24:57.242 [2024-07-15 22:48:40.638980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.639009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 00:24:57.242 [2024-07-15 22:48:40.639210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.639240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 00:24:57.242 [2024-07-15 22:48:40.639401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.639431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 00:24:57.242 [2024-07-15 22:48:40.639624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.639652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 00:24:57.242 [2024-07-15 22:48:40.639818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.639848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 
00:24:57.242 [2024-07-15 22:48:40.640068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.640098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 00:24:57.242 [2024-07-15 22:48:40.640303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.640333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 00:24:57.242 [2024-07-15 22:48:40.640500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.640526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 00:24:57.242 [2024-07-15 22:48:40.640720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.640750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 00:24:57.242 [2024-07-15 22:48:40.640951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.640981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 
00:24:57.242 [2024-07-15 22:48:40.641277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.641339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 00:24:57.242 [2024-07-15 22:48:40.641571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.641598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 00:24:57.242 [2024-07-15 22:48:40.641807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.641839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 00:24:57.242 [2024-07-15 22:48:40.642056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.642085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 00:24:57.242 [2024-07-15 22:48:40.642392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.642456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 
00:24:57.242 [2024-07-15 22:48:40.642688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.642716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 00:24:57.242 [2024-07-15 22:48:40.642928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.642959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 00:24:57.242 [2024-07-15 22:48:40.643155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.643186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 00:24:57.242 [2024-07-15 22:48:40.643473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.643528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 00:24:57.242 [2024-07-15 22:48:40.643744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.643772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 
00:24:57.242 [2024-07-15 22:48:40.643965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.643992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 00:24:57.242 [2024-07-15 22:48:40.644145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.644172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.242 qpair failed and we were unable to recover it. 00:24:57.242 [2024-07-15 22:48:40.644321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.242 [2024-07-15 22:48:40.644349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 00:24:57.243 [2024-07-15 22:48:40.644529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.243 [2024-07-15 22:48:40.644556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 00:24:57.243 [2024-07-15 22:48:40.644700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.243 [2024-07-15 22:48:40.644726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 
00:24:57.243 [2024-07-15 22:48:40.644896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.243 [2024-07-15 22:48:40.644930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 00:24:57.243 [2024-07-15 22:48:40.645294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.243 [2024-07-15 22:48:40.645355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 00:24:57.243 [2024-07-15 22:48:40.645584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.243 [2024-07-15 22:48:40.645613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 00:24:57.243 [2024-07-15 22:48:40.645782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.243 [2024-07-15 22:48:40.645811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 00:24:57.243 [2024-07-15 22:48:40.645978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.243 [2024-07-15 22:48:40.646009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 
00:24:57.243 [2024-07-15 22:48:40.646218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.243 [2024-07-15 22:48:40.646244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 00:24:57.243 [2024-07-15 22:48:40.646448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.243 [2024-07-15 22:48:40.646474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 00:24:57.243 [2024-07-15 22:48:40.646642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.243 [2024-07-15 22:48:40.646672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 00:24:57.243 [2024-07-15 22:48:40.646910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.243 [2024-07-15 22:48:40.646947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 00:24:57.243 [2024-07-15 22:48:40.647333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.243 [2024-07-15 22:48:40.647387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 
00:24:57.243 [2024-07-15 22:48:40.647665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.243 [2024-07-15 22:48:40.647691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 00:24:57.243 [2024-07-15 22:48:40.647926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.243 [2024-07-15 22:48:40.647956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 00:24:57.243 [2024-07-15 22:48:40.648184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.243 [2024-07-15 22:48:40.648213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 00:24:57.243 [2024-07-15 22:48:40.648449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.243 [2024-07-15 22:48:40.648476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 00:24:57.243 [2024-07-15 22:48:40.648653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.243 [2024-07-15 22:48:40.648682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 
00:24:57.243 [2024-07-15 22:48:40.648921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.243 [2024-07-15 22:48:40.648949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 00:24:57.243 [2024-07-15 22:48:40.649124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.243 [2024-07-15 22:48:40.649151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 00:24:57.243 [2024-07-15 22:48:40.649424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.243 [2024-07-15 22:48:40.649474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 00:24:57.243 [2024-07-15 22:48:40.649720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.243 [2024-07-15 22:48:40.649746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 00:24:57.243 [2024-07-15 22:48:40.649960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.243 [2024-07-15 22:48:40.649991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 
00:24:57.243 [2024-07-15 22:48:40.650161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.243 [2024-07-15 22:48:40.650191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 00:24:57.243 [2024-07-15 22:48:40.650386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.243 [2024-07-15 22:48:40.650416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 00:24:57.243 [2024-07-15 22:48:40.650615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.243 [2024-07-15 22:48:40.650642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 00:24:57.243 [2024-07-15 22:48:40.650858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.243 [2024-07-15 22:48:40.650900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 00:24:57.243 [2024-07-15 22:48:40.651120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.243 [2024-07-15 22:48:40.651163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 
00:24:57.243 [2024-07-15 22:48:40.651428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.243 [2024-07-15 22:48:40.651454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 00:24:57.243 [2024-07-15 22:48:40.651662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.243 [2024-07-15 22:48:40.651689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 00:24:57.243 [2024-07-15 22:48:40.651852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.243 [2024-07-15 22:48:40.651888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 00:24:57.243 [2024-07-15 22:48:40.652089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.243 [2024-07-15 22:48:40.652144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 00:24:57.243 [2024-07-15 22:48:40.652460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.243 [2024-07-15 22:48:40.652517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 
00:24:57.243 [2024-07-15 22:48:40.652766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.243 [2024-07-15 22:48:40.652793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 00:24:57.243 [2024-07-15 22:48:40.653015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.243 [2024-07-15 22:48:40.653045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.243 qpair failed and we were unable to recover it. 00:24:57.243 [2024-07-15 22:48:40.653249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.653279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 00:24:57.244 [2024-07-15 22:48:40.653487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.653513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 00:24:57.244 [2024-07-15 22:48:40.653727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.653754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 
00:24:57.244 [2024-07-15 22:48:40.653980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.654010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 00:24:57.244 [2024-07-15 22:48:40.654200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.654230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 00:24:57.244 [2024-07-15 22:48:40.654420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.654450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 00:24:57.244 [2024-07-15 22:48:40.654677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.654704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 00:24:57.244 [2024-07-15 22:48:40.654921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.654951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 
00:24:57.244 [2024-07-15 22:48:40.655170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.655201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 00:24:57.244 [2024-07-15 22:48:40.655527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.655589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 00:24:57.244 [2024-07-15 22:48:40.655814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.655841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 00:24:57.244 [2024-07-15 22:48:40.656000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.656026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 00:24:57.244 [2024-07-15 22:48:40.656219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.656249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 
00:24:57.244 [2024-07-15 22:48:40.656444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.656474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 00:24:57.244 [2024-07-15 22:48:40.656639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.656666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 00:24:57.244 [2024-07-15 22:48:40.656859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.656911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 00:24:57.244 [2024-07-15 22:48:40.657080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.657109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 00:24:57.244 [2024-07-15 22:48:40.657302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.657333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 
00:24:57.244 [2024-07-15 22:48:40.657556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.657583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 00:24:57.244 [2024-07-15 22:48:40.657748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.657778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 00:24:57.244 [2024-07-15 22:48:40.657968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.657998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 00:24:57.244 [2024-07-15 22:48:40.658182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.658246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 00:24:57.244 [2024-07-15 22:48:40.658478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.658505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 
00:24:57.244 [2024-07-15 22:48:40.658705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.658735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 00:24:57.244 [2024-07-15 22:48:40.658902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.658943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 00:24:57.244 [2024-07-15 22:48:40.659107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.659136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 00:24:57.244 [2024-07-15 22:48:40.659366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.659393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 00:24:57.244 [2024-07-15 22:48:40.659590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.659619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 
00:24:57.244 [2024-07-15 22:48:40.659816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.659846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 00:24:57.244 [2024-07-15 22:48:40.660037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.660067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 00:24:57.244 [2024-07-15 22:48:40.660262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.660289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 00:24:57.244 [2024-07-15 22:48:40.660517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.660547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 00:24:57.244 [2024-07-15 22:48:40.660762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.660792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 
00:24:57.244 [2024-07-15 22:48:40.661020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.661051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 00:24:57.244 [2024-07-15 22:48:40.661208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.661235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 00:24:57.244 [2024-07-15 22:48:40.661425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.661456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 00:24:57.244 [2024-07-15 22:48:40.661672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.661701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 00:24:57.244 [2024-07-15 22:48:40.661935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.661963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 
00:24:57.244 [2024-07-15 22:48:40.662157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.662184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 00:24:57.244 [2024-07-15 22:48:40.662382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.662411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 00:24:57.244 [2024-07-15 22:48:40.662598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.244 [2024-07-15 22:48:40.662628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.244 qpair failed and we were unable to recover it. 00:24:57.244 [2024-07-15 22:48:40.662850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.662899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 00:24:57.245 [2024-07-15 22:48:40.663114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.663141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 
00:24:57.245 [2024-07-15 22:48:40.663351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.663380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 00:24:57.245 [2024-07-15 22:48:40.663607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.663634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 00:24:57.245 [2024-07-15 22:48:40.663830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.663860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 00:24:57.245 [2024-07-15 22:48:40.664106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.664143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 00:24:57.245 [2024-07-15 22:48:40.664350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.664381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 
00:24:57.245 [2024-07-15 22:48:40.664577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.664604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 00:24:57.245 [2024-07-15 22:48:40.664799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.664829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 00:24:57.245 [2024-07-15 22:48:40.665012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.665039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 00:24:57.245 [2024-07-15 22:48:40.665185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.665212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 00:24:57.245 [2024-07-15 22:48:40.665399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.665426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 
00:24:57.245 [2024-07-15 22:48:40.665730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.665797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 00:24:57.245 [2024-07-15 22:48:40.666009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.666036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 00:24:57.245 [2024-07-15 22:48:40.666240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.666270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 00:24:57.245 [2024-07-15 22:48:40.666428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.666458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 00:24:57.245 [2024-07-15 22:48:40.666700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.666751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 
00:24:57.245 [2024-07-15 22:48:40.666929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.666956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 00:24:57.245 [2024-07-15 22:48:40.667151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.667184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 00:24:57.245 [2024-07-15 22:48:40.667380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.667414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 00:24:57.245 [2024-07-15 22:48:40.667646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.667699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 00:24:57.245 [2024-07-15 22:48:40.667922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.667949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 
00:24:57.245 [2024-07-15 22:48:40.668150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.668180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 00:24:57.245 [2024-07-15 22:48:40.668377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.668406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 00:24:57.245 [2024-07-15 22:48:40.668624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.668654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 00:24:57.245 [2024-07-15 22:48:40.668874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.668909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 00:24:57.245 [2024-07-15 22:48:40.669089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.669120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 
00:24:57.245 [2024-07-15 22:48:40.669284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.669314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 00:24:57.245 [2024-07-15 22:48:40.669532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.669560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 00:24:57.245 [2024-07-15 22:48:40.669712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.669739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 00:24:57.245 [2024-07-15 22:48:40.669922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.669949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 00:24:57.245 [2024-07-15 22:48:40.670149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.670179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 
00:24:57.245 [2024-07-15 22:48:40.670383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.670418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 00:24:57.245 [2024-07-15 22:48:40.670647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.670674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 00:24:57.245 [2024-07-15 22:48:40.670883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.670920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 00:24:57.245 [2024-07-15 22:48:40.671109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.671139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 00:24:57.245 [2024-07-15 22:48:40.671395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.671461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 
00:24:57.245 [2024-07-15 22:48:40.671661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.671688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 00:24:57.245 [2024-07-15 22:48:40.671926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.671955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 00:24:57.245 [2024-07-15 22:48:40.672117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.672147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 00:24:57.245 [2024-07-15 22:48:40.672385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.672442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.245 qpair failed and we were unable to recover it. 00:24:57.245 [2024-07-15 22:48:40.672638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.245 [2024-07-15 22:48:40.672664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.246 qpair failed and we were unable to recover it. 
00:24:57.246 [2024-07-15 22:48:40.672822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.246 [2024-07-15 22:48:40.672848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.246 qpair failed and we were unable to recover it. 00:24:57.246 [2024-07-15 22:48:40.673029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.246 [2024-07-15 22:48:40.673056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.246 qpair failed and we were unable to recover it. 00:24:57.246 [2024-07-15 22:48:40.673195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.246 [2024-07-15 22:48:40.673221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.246 qpair failed and we were unable to recover it. 00:24:57.246 [2024-07-15 22:48:40.673370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.246 [2024-07-15 22:48:40.673396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.246 qpair failed and we were unable to recover it. 00:24:57.246 [2024-07-15 22:48:40.673547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.246 [2024-07-15 22:48:40.673574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.246 qpair failed and we were unable to recover it. 
00:24:57.247 [2024-07-15 22:48:40.686422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.247 [2024-07-15 22:48:40.686448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.247 qpair failed and we were unable to recover it.
00:24:57.247 [2024-07-15 22:48:40.686701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.247 [2024-07-15 22:48:40.686747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.247 qpair failed and we were unable to recover it.
00:24:57.247 [2024-07-15 22:48:40.686993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.247 [2024-07-15 22:48:40.687023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.247 qpair failed and we were unable to recover it.
00:24:57.247 [2024-07-15 22:48:40.687231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.247 [2024-07-15 22:48:40.687277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.247 qpair failed and we were unable to recover it.
00:24:57.247 [2024-07-15 22:48:40.687452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.247 [2024-07-15 22:48:40.687481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.247 qpair failed and we were unable to recover it.
00:24:57.248 [2024-07-15 22:48:40.700443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.248 [2024-07-15 22:48:40.700485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.248 qpair failed and we were unable to recover it. 00:24:57.248 [2024-07-15 22:48:40.700733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.248 [2024-07-15 22:48:40.700761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.248 qpair failed and we were unable to recover it. 00:24:57.248 [2024-07-15 22:48:40.700964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.700997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 00:24:57.249 [2024-07-15 22:48:40.701191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.701219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 00:24:57.249 [2024-07-15 22:48:40.701552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.701614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 
00:24:57.249 [2024-07-15 22:48:40.701823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.701851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 00:24:57.249 [2024-07-15 22:48:40.702039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.702066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 00:24:57.249 [2024-07-15 22:48:40.702253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.702284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 00:24:57.249 [2024-07-15 22:48:40.702531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.702582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 00:24:57.249 [2024-07-15 22:48:40.702805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.702834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 
00:24:57.249 [2024-07-15 22:48:40.703024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.703053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 00:24:57.249 [2024-07-15 22:48:40.703242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.703274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 00:24:57.249 [2024-07-15 22:48:40.703447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.703475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 00:24:57.249 [2024-07-15 22:48:40.703655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.703687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 00:24:57.249 [2024-07-15 22:48:40.703892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.703922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 
00:24:57.249 [2024-07-15 22:48:40.704163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.704209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 00:24:57.249 [2024-07-15 22:48:40.704576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.704644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 00:24:57.249 [2024-07-15 22:48:40.704868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.704902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 00:24:57.249 [2024-07-15 22:48:40.705085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.705114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 00:24:57.249 [2024-07-15 22:48:40.705314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.705349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 
00:24:57.249 [2024-07-15 22:48:40.705556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.705586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 00:24:57.249 [2024-07-15 22:48:40.705802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.705830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 00:24:57.249 [2024-07-15 22:48:40.706020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.706048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 00:24:57.249 [2024-07-15 22:48:40.706195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.706234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 00:24:57.249 [2024-07-15 22:48:40.706489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.706563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 
00:24:57.249 [2024-07-15 22:48:40.706792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.706824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 00:24:57.249 [2024-07-15 22:48:40.707004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.707033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 00:24:57.249 [2024-07-15 22:48:40.707258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.707295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 00:24:57.249 [2024-07-15 22:48:40.707704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.707766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 00:24:57.249 [2024-07-15 22:48:40.708007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.708035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 
00:24:57.249 [2024-07-15 22:48:40.708246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.708277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 00:24:57.249 [2024-07-15 22:48:40.708477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.708508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 00:24:57.249 [2024-07-15 22:48:40.708820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.708883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 00:24:57.249 [2024-07-15 22:48:40.709107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.709135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 00:24:57.249 [2024-07-15 22:48:40.709314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.709345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 
00:24:57.249 [2024-07-15 22:48:40.709545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.709575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 00:24:57.249 [2024-07-15 22:48:40.709781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.709812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 00:24:57.249 [2024-07-15 22:48:40.709995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.710036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 00:24:57.249 [2024-07-15 22:48:40.710204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.710236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 00:24:57.249 [2024-07-15 22:48:40.710463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.710494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 
00:24:57.249 [2024-07-15 22:48:40.710708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.710737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 00:24:57.249 [2024-07-15 22:48:40.710946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.249 [2024-07-15 22:48:40.710974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.249 qpair failed and we were unable to recover it. 00:24:57.525 [2024-07-15 22:48:40.711175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.711206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.525 qpair failed and we were unable to recover it. 00:24:57.525 [2024-07-15 22:48:40.711398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.711430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.525 qpair failed and we were unable to recover it. 00:24:57.525 [2024-07-15 22:48:40.711607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.711638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.525 qpair failed and we were unable to recover it. 
00:24:57.525 [2024-07-15 22:48:40.711839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.711867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.525 qpair failed and we were unable to recover it. 00:24:57.525 [2024-07-15 22:48:40.712025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.712063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.525 qpair failed and we were unable to recover it. 00:24:57.525 [2024-07-15 22:48:40.712241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.712270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.525 qpair failed and we were unable to recover it. 00:24:57.525 [2024-07-15 22:48:40.712675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.712728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.525 qpair failed and we were unable to recover it. 00:24:57.525 [2024-07-15 22:48:40.712936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.712965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.525 qpair failed and we were unable to recover it. 
00:24:57.525 [2024-07-15 22:48:40.713150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.713181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.525 qpair failed and we were unable to recover it. 00:24:57.525 [2024-07-15 22:48:40.713388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.713427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.525 qpair failed and we were unable to recover it. 00:24:57.525 [2024-07-15 22:48:40.713757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.713813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.525 qpair failed and we were unable to recover it. 00:24:57.525 [2024-07-15 22:48:40.714017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.714044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.525 qpair failed and we were unable to recover it. 00:24:57.525 [2024-07-15 22:48:40.714209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.714238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.525 qpair failed and we were unable to recover it. 
00:24:57.525 [2024-07-15 22:48:40.714449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.714480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.525 qpair failed and we were unable to recover it. 00:24:57.525 [2024-07-15 22:48:40.714714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.714742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.525 qpair failed and we were unable to recover it. 00:24:57.525 [2024-07-15 22:48:40.714902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.714931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.525 qpair failed and we were unable to recover it. 00:24:57.525 [2024-07-15 22:48:40.715132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.715167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.525 qpair failed and we were unable to recover it. 00:24:57.525 [2024-07-15 22:48:40.715339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.715379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.525 qpair failed and we were unable to recover it. 
00:24:57.525 [2024-07-15 22:48:40.715631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.715684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.525 qpair failed and we were unable to recover it. 00:24:57.525 [2024-07-15 22:48:40.715882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.715911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.525 qpair failed and we were unable to recover it. 00:24:57.525 [2024-07-15 22:48:40.716119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.716150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.525 qpair failed and we were unable to recover it. 00:24:57.525 [2024-07-15 22:48:40.716345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.716385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.525 qpair failed and we were unable to recover it. 00:24:57.525 [2024-07-15 22:48:40.716647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.716703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.525 qpair failed and we were unable to recover it. 
00:24:57.525 [2024-07-15 22:48:40.716940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.716969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.525 qpair failed and we were unable to recover it. 00:24:57.525 [2024-07-15 22:48:40.717158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.717190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.525 qpair failed and we were unable to recover it. 00:24:57.525 [2024-07-15 22:48:40.717397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.717425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.525 qpair failed and we were unable to recover it. 00:24:57.525 [2024-07-15 22:48:40.717749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.717796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.525 qpair failed and we were unable to recover it. 00:24:57.525 [2024-07-15 22:48:40.717974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.718002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.525 qpair failed and we were unable to recover it. 
00:24:57.525 [2024-07-15 22:48:40.718226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.718257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.525 qpair failed and we were unable to recover it. 00:24:57.525 [2024-07-15 22:48:40.718455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.718493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.525 qpair failed and we were unable to recover it. 00:24:57.525 [2024-07-15 22:48:40.718803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.718867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.525 qpair failed and we were unable to recover it. 00:24:57.525 [2024-07-15 22:48:40.719089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.719116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.525 qpair failed and we were unable to recover it. 00:24:57.525 [2024-07-15 22:48:40.719288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.719330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.525 qpair failed and we were unable to recover it. 
00:24:57.525 [2024-07-15 22:48:40.719708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.719772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.525 qpair failed and we were unable to recover it. 00:24:57.525 [2024-07-15 22:48:40.719963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-07-15 22:48:40.719993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.526 qpair failed and we were unable to recover it. 00:24:57.526 [2024-07-15 22:48:40.720184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.526 [2024-07-15 22:48:40.720219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.526 qpair failed and we were unable to recover it. 00:24:57.526 [2024-07-15 22:48:40.720455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.526 [2024-07-15 22:48:40.720487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.526 qpair failed and we were unable to recover it. 00:24:57.526 [2024-07-15 22:48:40.720688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.526 [2024-07-15 22:48:40.720727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.526 qpair failed and we were unable to recover it. 
00:24:57.526 [2024-07-15 22:48:40.720989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.526 [2024-07-15 22:48:40.721019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.526 qpair failed and we were unable to recover it. 00:24:57.526 [2024-07-15 22:48:40.721196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.526 [2024-07-15 22:48:40.721225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.526 qpair failed and we were unable to recover it. 00:24:57.526 [2024-07-15 22:48:40.721445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.526 [2024-07-15 22:48:40.721497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.526 qpair failed and we were unable to recover it. 00:24:57.526 [2024-07-15 22:48:40.721696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.526 [2024-07-15 22:48:40.721726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.526 qpair failed and we were unable to recover it. 00:24:57.526 [2024-07-15 22:48:40.721982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.526 [2024-07-15 22:48:40.722014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.526 qpair failed and we were unable to recover it. 
00:24:57.526 [2024-07-15 22:48:40.722241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.526 [2024-07-15 22:48:40.722268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.526 qpair failed and we were unable to recover it.
00:24:57.526 [2024-07-15 22:48:40.722509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.526 [2024-07-15 22:48:40.722541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.526 qpair failed and we were unable to recover it.
00:24:57.526 [2024-07-15 22:48:40.722734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.526 [2024-07-15 22:48:40.722765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.526 qpair failed and we were unable to recover it.
00:24:57.526 [2024-07-15 22:48:40.722970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.526 [2024-07-15 22:48:40.722998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.526 qpair failed and we were unable to recover it.
00:24:57.526 [2024-07-15 22:48:40.723177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.526 [2024-07-15 22:48:40.723205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.526 qpair failed and we were unable to recover it.
00:24:57.526 [2024-07-15 22:48:40.723417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.526 [2024-07-15 22:48:40.723463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.526 qpair failed and we were unable to recover it.
00:24:57.526 [2024-07-15 22:48:40.723659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.526 [2024-07-15 22:48:40.723691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.526 qpair failed and we were unable to recover it.
00:24:57.526 [2024-07-15 22:48:40.723861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.526 [2024-07-15 22:48:40.723900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.526 qpair failed and we were unable to recover it.
00:24:57.526 [2024-07-15 22:48:40.724101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.526 [2024-07-15 22:48:40.724141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.526 qpair failed and we were unable to recover it.
00:24:57.526 [2024-07-15 22:48:40.724367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.526 [2024-07-15 22:48:40.724399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.526 qpair failed and we were unable to recover it.
00:24:57.526 [2024-07-15 22:48:40.724598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.526 [2024-07-15 22:48:40.724634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.526 qpair failed and we were unable to recover it.
00:24:57.526 [2024-07-15 22:48:40.724844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.526 [2024-07-15 22:48:40.724881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.526 qpair failed and we were unable to recover it.
00:24:57.526 [2024-07-15 22:48:40.725086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.526 [2024-07-15 22:48:40.725113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.526 qpair failed and we were unable to recover it.
00:24:57.526 [2024-07-15 22:48:40.725314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.526 [2024-07-15 22:48:40.725360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.526 qpair failed and we were unable to recover it.
00:24:57.526 [2024-07-15 22:48:40.725600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.526 [2024-07-15 22:48:40.725631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.526 qpair failed and we were unable to recover it.
00:24:57.526 [2024-07-15 22:48:40.725944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.526 [2024-07-15 22:48:40.725975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.526 qpair failed and we were unable to recover it.
00:24:57.526 [2024-07-15 22:48:40.726179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.526 [2024-07-15 22:48:40.726206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.526 qpair failed and we were unable to recover it.
00:24:57.526 [2024-07-15 22:48:40.726374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.526 [2024-07-15 22:48:40.726401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.526 qpair failed and we were unable to recover it.
00:24:57.526 [2024-07-15 22:48:40.726571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.526 [2024-07-15 22:48:40.726600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.526 qpair failed and we were unable to recover it.
00:24:57.526 [2024-07-15 22:48:40.726796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.526 [2024-07-15 22:48:40.726851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.526 qpair failed and we were unable to recover it.
00:24:57.526 [2024-07-15 22:48:40.727080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.526 [2024-07-15 22:48:40.727132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.526 qpair failed and we were unable to recover it.
00:24:57.526 [2024-07-15 22:48:40.727328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.526 [2024-07-15 22:48:40.727357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.526 qpair failed and we were unable to recover it.
00:24:57.526 [2024-07-15 22:48:40.727560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.526 [2024-07-15 22:48:40.727591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.526 qpair failed and we were unable to recover it.
00:24:57.526 [2024-07-15 22:48:40.727954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.526 [2024-07-15 22:48:40.727982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.526 qpair failed and we were unable to recover it.
00:24:57.526 [2024-07-15 22:48:40.728162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.526 [2024-07-15 22:48:40.728189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.526 qpair failed and we were unable to recover it.
00:24:57.526 [2024-07-15 22:48:40.728343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.526 [2024-07-15 22:48:40.728371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.526 qpair failed and we were unable to recover it.
00:24:57.526 [2024-07-15 22:48:40.728600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.526 [2024-07-15 22:48:40.728645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.526 qpair failed and we were unable to recover it.
00:24:57.526 [2024-07-15 22:48:40.728834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.526 [2024-07-15 22:48:40.728862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.526 qpair failed and we were unable to recover it.
00:24:57.526 [2024-07-15 22:48:40.729067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.526 [2024-07-15 22:48:40.729094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.526 qpair failed and we were unable to recover it.
00:24:57.526 [2024-07-15 22:48:40.729323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.526 [2024-07-15 22:48:40.729367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.526 qpair failed and we were unable to recover it.
00:24:57.526 [2024-07-15 22:48:40.729564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.526 [2024-07-15 22:48:40.729593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.526 qpair failed and we were unable to recover it.
00:24:57.526 [2024-07-15 22:48:40.729765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.526 [2024-07-15 22:48:40.729792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.526 qpair failed and we were unable to recover it.
00:24:57.526 [2024-07-15 22:48:40.729970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.729999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.730204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.730249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.730426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.730470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.730742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.730792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.730976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.731004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.731200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.731245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.731480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.731524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.731674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.731702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.731889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.731918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.732098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.732125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.732322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.732367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.732601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.732646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.732826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.732854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.733062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.733090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.733305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.733349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.733659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.733725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.733962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.733990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.734227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.734271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.734521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.734566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.734741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.734773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.734933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.734961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.735160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.735205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.735437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.735481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.735725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.735769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.735949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.735978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.736181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.736225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.736399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.736444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.736682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.736732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.736960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.737013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.737221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.737266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.737478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.737523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.737676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.737714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.737904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.737933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.738128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.738179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.738374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.738418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.738625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.738670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.738870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.738905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.739078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.739123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.739355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.739399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.739707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.739771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.739952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.527 [2024-07-15 22:48:40.739979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.527 qpair failed and we were unable to recover it.
00:24:57.527 [2024-07-15 22:48:40.740159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.740205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.740439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.740485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.740823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.740873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.741057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.741086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.741282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.741327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.741528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.741573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.741778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.741805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.741962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.741990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.742182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.742231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.742436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.742480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.742754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.742818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.743047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.743091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.743304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.743348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.743554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.743598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.743802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.743830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.744014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.744042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.744268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.744312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.744513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.744558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.744761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.744789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.744960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.744988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.745187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.745240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.745453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.745497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.745902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.745970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.746155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.746183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.746414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.746458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.746661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.746705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.746920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.746952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.747185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.747230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.747461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.747505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.747682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.747726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.747873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.747908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.748109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.748163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.748367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.748411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.748616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.748661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.748826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.748854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.749050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.749078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.749285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.749331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.749537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.749581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.749729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.749757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.749987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.750032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.750272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.528 [2024-07-15 22:48:40.750317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.528 qpair failed and we were unable to recover it.
00:24:57.528 [2024-07-15 22:48:40.750546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.529 [2024-07-15 22:48:40.750591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.529 qpair failed and we were unable to recover it.
00:24:57.529 [2024-07-15 22:48:40.750762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.529 [2024-07-15 22:48:40.750790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.529 qpair failed and we were unable to recover it.
00:24:57.529 [2024-07-15 22:48:40.751020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.529 [2024-07-15 22:48:40.751065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.529 qpair failed and we were unable to recover it.
00:24:57.529 [2024-07-15 22:48:40.751226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.529 [2024-07-15 22:48:40.751271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.529 qpair failed and we were unable to recover it.
00:24:57.529 [2024-07-15 22:48:40.751470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.529 [2024-07-15 22:48:40.751517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.529 qpair failed and we were unable to recover it.
00:24:57.529 [2024-07-15 22:48:40.751700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.529 [2024-07-15 22:48:40.751729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.529 qpair failed and we were unable to recover it.
00:24:57.529 [2024-07-15 22:48:40.751952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.529 [2024-07-15 22:48:40.751998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.529 qpair failed and we were unable to recover it.
00:24:57.529 [2024-07-15 22:48:40.752163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.529 [2024-07-15 22:48:40.752208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.529 qpair failed and we were unable to recover it.
00:24:57.529 [2024-07-15 22:48:40.752398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.529 [2024-07-15 22:48:40.752443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.529 qpair failed and we were unable to recover it.
00:24:57.529 [2024-07-15 22:48:40.752604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.529 [2024-07-15 22:48:40.752631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.529 qpair failed and we were unable to recover it.
00:24:57.529 [2024-07-15 22:48:40.752809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.529 [2024-07-15 22:48:40.752836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.529 qpair failed and we were unable to recover it.
00:24:57.529 [2024-07-15 22:48:40.753050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.529 [2024-07-15 22:48:40.753096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.529 qpair failed and we were unable to recover it.
00:24:57.529 [2024-07-15 22:48:40.753280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.529 [2024-07-15 22:48:40.753325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.529 qpair failed and we were unable to recover it.
00:24:57.529 [2024-07-15 22:48:40.753577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.529 [2024-07-15 22:48:40.753629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.529 qpair failed and we were unable to recover it.
00:24:57.529 [2024-07-15 22:48:40.753832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.529 [2024-07-15 22:48:40.753859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.529 qpair failed and we were unable to recover it.
00:24:57.529 [2024-07-15 22:48:40.754063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.529 [2024-07-15 22:48:40.754107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.529 qpair failed and we were unable to recover it.
00:24:57.529 [2024-07-15 22:48:40.754338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.529 [2024-07-15 22:48:40.754384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.529 qpair failed and we were unable to recover it.
00:24:57.529 [2024-07-15 22:48:40.754746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.529 [2024-07-15 22:48:40.754807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.529 qpair failed and we were unable to recover it.
00:24:57.529 [2024-07-15 22:48:40.755035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.529 [2024-07-15 22:48:40.755061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.529 qpair failed and we were unable to recover it.
00:24:57.529 [2024-07-15 22:48:40.755271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.529 [2024-07-15 22:48:40.755316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.529 qpair failed and we were unable to recover it.
00:24:57.529 [2024-07-15 22:48:40.755517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.529 [2024-07-15 22:48:40.755561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.529 qpair failed and we were unable to recover it.
00:24:57.529 [2024-07-15 22:48:40.755766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.529 [2024-07-15 22:48:40.755793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.529 qpair failed and we were unable to recover it.
00:24:57.529 [2024-07-15 22:48:40.755970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.529 [2024-07-15 22:48:40.755997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.529 qpair failed and we were unable to recover it.
00:24:57.529 [2024-07-15 22:48:40.756158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.529 [2024-07-15 22:48:40.756204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.529 qpair failed and we were unable to recover it.
00:24:57.529 [2024-07-15 22:48:40.756436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.529 [2024-07-15 22:48:40.756480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.529 qpair failed and we were unable to recover it.
00:24:57.529 [2024-07-15 22:48:40.756726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.529 [2024-07-15 22:48:40.756775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.529 qpair failed and we were unable to recover it.
00:24:57.529 [2024-07-15 22:48:40.756971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.529 [2024-07-15 22:48:40.757016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.529 qpair failed and we were unable to recover it.
00:24:57.529 [2024-07-15 22:48:40.757216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.529 [2024-07-15 22:48:40.757260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.529 qpair failed and we were unable to recover it.
00:24:57.529 [2024-07-15 22:48:40.757487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.529 [2024-07-15 22:48:40.757532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.529 qpair failed and we were unable to recover it.
00:24:57.529 [2024-07-15 22:48:40.757711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.529 [2024-07-15 22:48:40.757741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.529 qpair failed and we were unable to recover it.
00:24:57.529 [2024-07-15 22:48:40.757940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.529 [2024-07-15 22:48:40.757970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.529 qpair failed and we were unable to recover it.
00:24:57.529 [2024-07-15 22:48:40.758223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.529 [2024-07-15 22:48:40.758267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.529 qpair failed and we were unable to recover it.
00:24:57.529 [2024-07-15 22:48:40.758503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.529 [2024-07-15 22:48:40.758547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.529 qpair failed and we were unable to recover it.
00:24:57.529 [2024-07-15 22:48:40.758722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.529 [2024-07-15 22:48:40.758751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.758949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.758980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.759189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.759234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.759439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.759485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.759690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.759718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.759925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.759951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.760187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.760232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.760469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.760514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.760808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.760869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.761050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.761078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.761318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.761363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.761539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.761585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.761772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.761811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.761966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.761993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.762192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.762236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.762442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.762488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.762687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.762714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.762897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.762935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.763113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.763151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.763325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.763370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.763571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.763615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.763798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.763835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.764028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.764056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.764287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.764332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.764577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.764632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.764776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.764804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.764996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.765023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.765225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.765270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.765415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.765456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.765690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.765735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.765973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.766017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.766231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.766276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.766620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.766684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.766899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.766935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.767149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.767177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.767413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.767456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.767680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.767725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.767903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.767937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.768142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.768186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.768384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.768428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.768630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.530 [2024-07-15 22:48:40.768676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.530 qpair failed and we were unable to recover it.
00:24:57.530 [2024-07-15 22:48:40.768886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.531 [2024-07-15 22:48:40.768925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.531 qpair failed and we were unable to recover it.
00:24:57.531 [2024-07-15 22:48:40.769126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.531 [2024-07-15 22:48:40.769180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.531 qpair failed and we were unable to recover it.
00:24:57.531 [2024-07-15 22:48:40.769394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.769422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 00:24:57.531 [2024-07-15 22:48:40.769784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.769841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 00:24:57.531 [2024-07-15 22:48:40.770032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.770059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 00:24:57.531 [2024-07-15 22:48:40.770272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.770317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 00:24:57.531 [2024-07-15 22:48:40.770521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.770565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 
00:24:57.531 [2024-07-15 22:48:40.770745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.770773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 00:24:57.531 [2024-07-15 22:48:40.770997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.771041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 00:24:57.531 [2024-07-15 22:48:40.771263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.771309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 00:24:57.531 [2024-07-15 22:48:40.771510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.771555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 00:24:57.531 [2024-07-15 22:48:40.771733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.771767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 
00:24:57.531 [2024-07-15 22:48:40.771986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.772031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 00:24:57.531 [2024-07-15 22:48:40.772226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.772270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 00:24:57.531 [2024-07-15 22:48:40.772485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.772529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 00:24:57.531 [2024-07-15 22:48:40.772683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.772722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 00:24:57.531 [2024-07-15 22:48:40.772928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.772955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 
00:24:57.531 [2024-07-15 22:48:40.773193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.773238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 00:24:57.531 [2024-07-15 22:48:40.773470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.773515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 00:24:57.531 [2024-07-15 22:48:40.773832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.773881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 00:24:57.531 [2024-07-15 22:48:40.774063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.774091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 00:24:57.531 [2024-07-15 22:48:40.774301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.774346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 
00:24:57.531 [2024-07-15 22:48:40.774549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.774593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 00:24:57.531 [2024-07-15 22:48:40.774775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.774811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 00:24:57.531 [2024-07-15 22:48:40.774996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.775023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 00:24:57.531 [2024-07-15 22:48:40.775253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.775296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 00:24:57.531 [2024-07-15 22:48:40.775503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.775548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 
00:24:57.531 [2024-07-15 22:48:40.775751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.775779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 00:24:57.531 [2024-07-15 22:48:40.775982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.776009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 00:24:57.531 [2024-07-15 22:48:40.776227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.776273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 00:24:57.531 [2024-07-15 22:48:40.776476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.776520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 00:24:57.531 [2024-07-15 22:48:40.776723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.776755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 
00:24:57.531 [2024-07-15 22:48:40.776943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.776988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 00:24:57.531 [2024-07-15 22:48:40.777154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.777205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 00:24:57.531 [2024-07-15 22:48:40.777407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.777451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 00:24:57.531 [2024-07-15 22:48:40.777792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.777843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 00:24:57.531 [2024-07-15 22:48:40.778052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.778097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 
00:24:57.531 [2024-07-15 22:48:40.778333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.778377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 00:24:57.531 [2024-07-15 22:48:40.778596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.778625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 00:24:57.531 [2024-07-15 22:48:40.778800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.778840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 00:24:57.531 [2024-07-15 22:48:40.779087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-07-15 22:48:40.779132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.531 qpair failed and we were unable to recover it. 00:24:57.531 [2024-07-15 22:48:40.779346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.779391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 
00:24:57.532 [2024-07-15 22:48:40.779590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.779634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 00:24:57.532 [2024-07-15 22:48:40.779843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.779871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 00:24:57.532 [2024-07-15 22:48:40.780055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.780083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 00:24:57.532 [2024-07-15 22:48:40.780287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.780332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 00:24:57.532 [2024-07-15 22:48:40.780532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.780577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 
00:24:57.532 [2024-07-15 22:48:40.780777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.780806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 00:24:57.532 [2024-07-15 22:48:40.780980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.781007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 00:24:57.532 [2024-07-15 22:48:40.781213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.781257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 00:24:57.532 [2024-07-15 22:48:40.781487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.781532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 00:24:57.532 [2024-07-15 22:48:40.781831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.781888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 
00:24:57.532 [2024-07-15 22:48:40.782068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.782096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 00:24:57.532 [2024-07-15 22:48:40.782293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.782337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 00:24:57.532 [2024-07-15 22:48:40.782561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.782605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 00:24:57.532 [2024-07-15 22:48:40.782785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.782812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 00:24:57.532 [2024-07-15 22:48:40.782987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.783015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 
00:24:57.532 [2024-07-15 22:48:40.783215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.783245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 00:24:57.532 [2024-07-15 22:48:40.783494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.783538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 00:24:57.532 [2024-07-15 22:48:40.783720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.783749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 00:24:57.532 [2024-07-15 22:48:40.783951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.783995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 00:24:57.532 [2024-07-15 22:48:40.784238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.784283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 
00:24:57.532 [2024-07-15 22:48:40.784517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.784562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 00:24:57.532 [2024-07-15 22:48:40.784709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.784737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 00:24:57.532 [2024-07-15 22:48:40.784962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.785013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 00:24:57.532 [2024-07-15 22:48:40.785192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.785238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 00:24:57.532 [2024-07-15 22:48:40.785415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.785443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 
00:24:57.532 [2024-07-15 22:48:40.785636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.785684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 00:24:57.532 [2024-07-15 22:48:40.785860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.785895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 00:24:57.532 [2024-07-15 22:48:40.786100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.786145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 00:24:57.532 [2024-07-15 22:48:40.786366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.786394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 00:24:57.532 [2024-07-15 22:48:40.786722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.786782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 
00:24:57.532 [2024-07-15 22:48:40.786966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.786992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 00:24:57.532 [2024-07-15 22:48:40.787160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.787203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 00:24:57.532 [2024-07-15 22:48:40.787428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.787472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 00:24:57.532 [2024-07-15 22:48:40.787782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.787840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 00:24:57.532 [2024-07-15 22:48:40.788070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.788116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 
00:24:57.532 [2024-07-15 22:48:40.788324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.788369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 00:24:57.532 [2024-07-15 22:48:40.788537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.788581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 00:24:57.532 [2024-07-15 22:48:40.788791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.788819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 00:24:57.532 [2024-07-15 22:48:40.788992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.789020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 00:24:57.532 [2024-07-15 22:48:40.789224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-07-15 22:48:40.789269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.532 qpair failed and we were unable to recover it. 
00:24:57.533 [2024-07-15 22:48:40.789464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.533 [2024-07-15 22:48:40.789495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.533 qpair failed and we were unable to recover it. 00:24:57.533 [2024-07-15 22:48:40.789900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.533 [2024-07-15 22:48:40.789967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.533 qpair failed and we were unable to recover it. 00:24:57.533 [2024-07-15 22:48:40.790143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.533 [2024-07-15 22:48:40.790188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.533 qpair failed and we were unable to recover it. 00:24:57.533 [2024-07-15 22:48:40.790395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.533 [2024-07-15 22:48:40.790440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.533 qpair failed and we were unable to recover it. 00:24:57.533 [2024-07-15 22:48:40.790642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.533 [2024-07-15 22:48:40.790687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.533 qpair failed and we were unable to recover it. 
00:24:57.533 [2024-07-15 22:48:40.790867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.533 [2024-07-15 22:48:40.790911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.533 qpair failed and we were unable to recover it.
00:24:57.536 [... same connect() failure (errno = 111) against tqpair=0x7fd660000b90, addr=10.0.0.2, port=4420 repeated through 22:48:40.818654; duplicate log entries omitted ...]
00:24:57.536 [2024-07-15 22:48:40.818834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.818862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 00:24:57.536 [2024-07-15 22:48:40.819022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.819051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 00:24:57.536 [2024-07-15 22:48:40.819278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.819323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 00:24:57.536 [2024-07-15 22:48:40.819533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.819579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 00:24:57.536 [2024-07-15 22:48:40.819758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.819785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 
00:24:57.536 [2024-07-15 22:48:40.820009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.820054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 00:24:57.536 [2024-07-15 22:48:40.820254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.820286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 00:24:57.536 [2024-07-15 22:48:40.820538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.820583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 00:24:57.536 [2024-07-15 22:48:40.820756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.820785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 00:24:57.536 [2024-07-15 22:48:40.820988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.821033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 
00:24:57.536 [2024-07-15 22:48:40.821229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.821260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 00:24:57.536 [2024-07-15 22:48:40.821480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.821525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 00:24:57.536 [2024-07-15 22:48:40.821725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.821753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 00:24:57.536 [2024-07-15 22:48:40.821991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.822037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 00:24:57.536 [2024-07-15 22:48:40.822217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.822265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 
00:24:57.536 [2024-07-15 22:48:40.822501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.822545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 00:24:57.536 [2024-07-15 22:48:40.822747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.822779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 00:24:57.536 [2024-07-15 22:48:40.822989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.823034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 00:24:57.536 [2024-07-15 22:48:40.823263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.823307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 00:24:57.536 [2024-07-15 22:48:40.823539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.823583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 
00:24:57.536 [2024-07-15 22:48:40.823758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.823785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 00:24:57.536 [2024-07-15 22:48:40.824015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.824060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 00:24:57.536 [2024-07-15 22:48:40.824252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.824282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 00:24:57.536 [2024-07-15 22:48:40.824502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.824547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 00:24:57.536 [2024-07-15 22:48:40.824751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.824778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 
00:24:57.536 [2024-07-15 22:48:40.824980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.825025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 00:24:57.536 [2024-07-15 22:48:40.825226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.825271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 00:24:57.536 [2024-07-15 22:48:40.825472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.825517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 00:24:57.536 [2024-07-15 22:48:40.825716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.825744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 00:24:57.536 [2024-07-15 22:48:40.825939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.825986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 
00:24:57.536 [2024-07-15 22:48:40.826188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.826234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 00:24:57.536 [2024-07-15 22:48:40.826435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.826480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 00:24:57.536 [2024-07-15 22:48:40.826654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.826682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 00:24:57.536 [2024-07-15 22:48:40.826862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.826912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 00:24:57.536 [2024-07-15 22:48:40.827120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.827164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 
00:24:57.536 [2024-07-15 22:48:40.827395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.827439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 00:24:57.536 [2024-07-15 22:48:40.827677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.536 [2024-07-15 22:48:40.827722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.536 qpair failed and we were unable to recover it. 00:24:57.536 [2024-07-15 22:48:40.827927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.827956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 00:24:57.537 [2024-07-15 22:48:40.828147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.828192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 00:24:57.537 [2024-07-15 22:48:40.828364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.828410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 
00:24:57.537 [2024-07-15 22:48:40.828620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.828664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 00:24:57.537 [2024-07-15 22:48:40.828842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.828870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 00:24:57.537 [2024-07-15 22:48:40.829085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.829113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 00:24:57.537 [2024-07-15 22:48:40.829328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.829375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 00:24:57.537 [2024-07-15 22:48:40.829598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.829643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 
00:24:57.537 [2024-07-15 22:48:40.829811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.829839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 00:24:57.537 [2024-07-15 22:48:40.830072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.830117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 00:24:57.537 [2024-07-15 22:48:40.830315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.830359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 00:24:57.537 [2024-07-15 22:48:40.830533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.830577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 00:24:57.537 [2024-07-15 22:48:40.830723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.830751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 
00:24:57.537 [2024-07-15 22:48:40.830953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.830999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 00:24:57.537 [2024-07-15 22:48:40.831190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.831236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 00:24:57.537 [2024-07-15 22:48:40.831474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.831519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 00:24:57.537 [2024-07-15 22:48:40.831700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.831727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 00:24:57.537 [2024-07-15 22:48:40.831953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.831999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 
00:24:57.537 [2024-07-15 22:48:40.832191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.832237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 00:24:57.537 [2024-07-15 22:48:40.832424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.832472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 00:24:57.537 [2024-07-15 22:48:40.832621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.832649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 00:24:57.537 [2024-07-15 22:48:40.832823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.832851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 00:24:57.537 [2024-07-15 22:48:40.833043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.833092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 
00:24:57.537 [2024-07-15 22:48:40.833281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.833310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 00:24:57.537 [2024-07-15 22:48:40.833511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.833556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 00:24:57.537 [2024-07-15 22:48:40.833756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.833784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 00:24:57.537 [2024-07-15 22:48:40.833945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.833992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 00:24:57.537 [2024-07-15 22:48:40.834201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.834245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 
00:24:57.537 [2024-07-15 22:48:40.834411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.834465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 00:24:57.537 [2024-07-15 22:48:40.834667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.834711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 00:24:57.537 [2024-07-15 22:48:40.834890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.834918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 00:24:57.537 [2024-07-15 22:48:40.835074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.835103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 00:24:57.537 [2024-07-15 22:48:40.835335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.835378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 
00:24:57.537 [2024-07-15 22:48:40.835586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.835631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 00:24:57.537 [2024-07-15 22:48:40.835786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.835814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 00:24:57.537 [2024-07-15 22:48:40.836015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.836059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.537 qpair failed and we were unable to recover it. 00:24:57.537 [2024-07-15 22:48:40.836289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.537 [2024-07-15 22:48:40.836333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.538 qpair failed and we were unable to recover it. 00:24:57.538 [2024-07-15 22:48:40.836567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.538 [2024-07-15 22:48:40.836612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.538 qpair failed and we were unable to recover it. 
00:24:57.538 [2024-07-15 22:48:40.836790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.538 [2024-07-15 22:48:40.836817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.538 qpair failed and we were unable to recover it. 00:24:57.538 [2024-07-15 22:48:40.837020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.538 [2024-07-15 22:48:40.837065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.538 qpair failed and we were unable to recover it. 00:24:57.538 [2024-07-15 22:48:40.837288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.538 [2024-07-15 22:48:40.837334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.538 qpair failed and we were unable to recover it. 00:24:57.538 [2024-07-15 22:48:40.837537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.538 [2024-07-15 22:48:40.837582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.538 qpair failed and we were unable to recover it. 00:24:57.538 [2024-07-15 22:48:40.837787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.538 [2024-07-15 22:48:40.837815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.538 qpair failed and we were unable to recover it. 
00:24:57.540 (identical posix.c:1038 / nvme_tcp.c:2383 error pair repeated for every retry from 22:48:40.837996 through 22:48:40.864418; each connect() attempt to addr=10.0.0.2, port=4420 failed with errno = 111 and the qpair could not be recovered)
00:24:57.540 [2024-07-15 22:48:40.864644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.540 [2024-07-15 22:48:40.864689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.540 qpair failed and we were unable to recover it. 00:24:57.540 [2024-07-15 22:48:40.864893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.540 [2024-07-15 22:48:40.864920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.540 qpair failed and we were unable to recover it. 00:24:57.540 [2024-07-15 22:48:40.865124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.540 [2024-07-15 22:48:40.865168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.540 qpair failed and we were unable to recover it. 00:24:57.540 [2024-07-15 22:48:40.865503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.540 [2024-07-15 22:48:40.865551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.540 qpair failed and we were unable to recover it. 00:24:57.540 [2024-07-15 22:48:40.865779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.540 [2024-07-15 22:48:40.865823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.540 qpair failed and we were unable to recover it. 
00:24:57.541 [2024-07-15 22:48:40.865997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.866025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 00:24:57.541 [2024-07-15 22:48:40.866230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.866274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 00:24:57.541 [2024-07-15 22:48:40.866500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.866545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 00:24:57.541 [2024-07-15 22:48:40.866745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.866792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 00:24:57.541 [2024-07-15 22:48:40.866995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.867022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 
00:24:57.541 [2024-07-15 22:48:40.867175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.867203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 00:24:57.541 [2024-07-15 22:48:40.867397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.867442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 00:24:57.541 [2024-07-15 22:48:40.867662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.867706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 00:24:57.541 [2024-07-15 22:48:40.867861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.867897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 00:24:57.541 [2024-07-15 22:48:40.868063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.868090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 
00:24:57.541 [2024-07-15 22:48:40.868292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.868338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 00:24:57.541 [2024-07-15 22:48:40.868559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.868603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 00:24:57.541 [2024-07-15 22:48:40.868791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.868820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 00:24:57.541 [2024-07-15 22:48:40.868999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.869027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 00:24:57.541 [2024-07-15 22:48:40.869209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.869253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 
00:24:57.541 [2024-07-15 22:48:40.869457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.869503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 00:24:57.541 [2024-07-15 22:48:40.869705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.869749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 00:24:57.541 [2024-07-15 22:48:40.869938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.869984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 00:24:57.541 [2024-07-15 22:48:40.870186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.870231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 00:24:57.541 [2024-07-15 22:48:40.870459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.870504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 
00:24:57.541 [2024-07-15 22:48:40.870683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.870711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 00:24:57.541 [2024-07-15 22:48:40.870913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.870945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 00:24:57.541 [2024-07-15 22:48:40.871216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.871267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 00:24:57.541 [2024-07-15 22:48:40.871493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.871538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 00:24:57.541 [2024-07-15 22:48:40.871739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.871784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 
00:24:57.541 [2024-07-15 22:48:40.871963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.871991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 00:24:57.541 [2024-07-15 22:48:40.872204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.872233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 00:24:57.541 [2024-07-15 22:48:40.872435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.872480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 00:24:57.541 [2024-07-15 22:48:40.872713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.872757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 00:24:57.541 [2024-07-15 22:48:40.872960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.873005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 
00:24:57.541 [2024-07-15 22:48:40.873186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.873233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 00:24:57.541 [2024-07-15 22:48:40.873435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.873480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 00:24:57.541 [2024-07-15 22:48:40.873658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.873702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 00:24:57.541 [2024-07-15 22:48:40.873884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.873912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 00:24:57.541 [2024-07-15 22:48:40.874061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.874087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 
00:24:57.541 [2024-07-15 22:48:40.874323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.874367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 00:24:57.541 [2024-07-15 22:48:40.874558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.874603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 00:24:57.541 [2024-07-15 22:48:40.874804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.874831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 00:24:57.541 [2024-07-15 22:48:40.875018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.875047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 00:24:57.541 [2024-07-15 22:48:40.875283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.875338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 
00:24:57.541 [2024-07-15 22:48:40.875514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-07-15 22:48:40.875559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.541 qpair failed and we were unable to recover it. 00:24:57.542 [2024-07-15 22:48:40.875767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-07-15 22:48:40.875795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.542 qpair failed and we were unable to recover it. 00:24:57.542 [2024-07-15 22:48:40.875992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-07-15 22:48:40.876038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.542 qpair failed and we were unable to recover it. 00:24:57.542 [2024-07-15 22:48:40.876243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-07-15 22:48:40.876287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.542 qpair failed and we were unable to recover it. 00:24:57.542 [2024-07-15 22:48:40.876468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-07-15 22:48:40.876512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.542 qpair failed and we were unable to recover it. 
00:24:57.542 [2024-07-15 22:48:40.876712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-07-15 22:48:40.876740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.542 qpair failed and we were unable to recover it. 00:24:57.542 [2024-07-15 22:48:40.876930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-07-15 22:48:40.876974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.542 qpair failed and we were unable to recover it. 00:24:57.542 [2024-07-15 22:48:40.877202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-07-15 22:48:40.877247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.542 qpair failed and we were unable to recover it. 00:24:57.542 [2024-07-15 22:48:40.877465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-07-15 22:48:40.877510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.542 qpair failed and we were unable to recover it. 00:24:57.542 [2024-07-15 22:48:40.877681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-07-15 22:48:40.877708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.542 qpair failed and we were unable to recover it. 
00:24:57.542 [2024-07-15 22:48:40.877858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-07-15 22:48:40.877892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.542 qpair failed and we were unable to recover it. 00:24:57.542 [2024-07-15 22:48:40.878090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-07-15 22:48:40.878135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.542 qpair failed and we were unable to recover it. 00:24:57.542 [2024-07-15 22:48:40.878350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-07-15 22:48:40.878393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.542 qpair failed and we were unable to recover it. 00:24:57.542 [2024-07-15 22:48:40.878638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-07-15 22:48:40.878683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.542 qpair failed and we were unable to recover it. 00:24:57.542 [2024-07-15 22:48:40.878829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-07-15 22:48:40.878857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.542 qpair failed and we were unable to recover it. 
00:24:57.542 [2024-07-15 22:48:40.879020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-07-15 22:48:40.879048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.542 qpair failed and we were unable to recover it. 00:24:57.542 [2024-07-15 22:48:40.879249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-07-15 22:48:40.879295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.542 qpair failed and we were unable to recover it. 00:24:57.542 [2024-07-15 22:48:40.879490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-07-15 22:48:40.879535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.542 qpair failed and we were unable to recover it. 00:24:57.542 [2024-07-15 22:48:40.879874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-07-15 22:48:40.879929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.542 qpair failed and we were unable to recover it. 00:24:57.542 [2024-07-15 22:48:40.880109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-07-15 22:48:40.880136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.542 qpair failed and we were unable to recover it. 
00:24:57.542 [2024-07-15 22:48:40.880304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-07-15 22:48:40.880349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.542 qpair failed and we were unable to recover it. 00:24:57.542 [2024-07-15 22:48:40.880580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-07-15 22:48:40.880630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.542 qpair failed and we were unable to recover it. 00:24:57.542 [2024-07-15 22:48:40.880818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-07-15 22:48:40.880846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.542 qpair failed and we were unable to recover it. 00:24:57.542 [2024-07-15 22:48:40.881059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-07-15 22:48:40.881088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.542 qpair failed and we were unable to recover it. 00:24:57.542 [2024-07-15 22:48:40.881293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-07-15 22:48:40.881338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.542 qpair failed and we were unable to recover it. 
00:24:57.542 [2024-07-15 22:48:40.881572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-07-15 22:48:40.881617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.542 qpair failed and we were unable to recover it. 00:24:57.542 [2024-07-15 22:48:40.881804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-07-15 22:48:40.881831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.542 qpair failed and we were unable to recover it. 00:24:57.542 [2024-07-15 22:48:40.882049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-07-15 22:48:40.882076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.542 qpair failed and we were unable to recover it. 00:24:57.542 [2024-07-15 22:48:40.882303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-07-15 22:48:40.882347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.542 qpair failed and we were unable to recover it. 00:24:57.542 [2024-07-15 22:48:40.882540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-07-15 22:48:40.882586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.542 qpair failed and we were unable to recover it. 
00:24:57.542 [2024-07-15 22:48:40.882793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-07-15 22:48:40.882821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.542 qpair failed and we were unable to recover it. 00:24:57.542 [2024-07-15 22:48:40.883021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-07-15 22:48:40.883049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.542 qpair failed and we were unable to recover it. 00:24:57.542 [2024-07-15 22:48:40.883281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-07-15 22:48:40.883325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.542 qpair failed and we were unable to recover it. 00:24:57.542 [2024-07-15 22:48:40.883550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-07-15 22:48:40.883594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.542 qpair failed and we were unable to recover it. 00:24:57.542 [2024-07-15 22:48:40.883801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-07-15 22:48:40.883829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.542 qpair failed and we were unable to recover it. 
00:24:57.542 [2024-07-15 22:48:40.884049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.884094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 00:24:57.543 [2024-07-15 22:48:40.884273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.884302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 00:24:57.543 [2024-07-15 22:48:40.884478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.884522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 00:24:57.543 [2024-07-15 22:48:40.884702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.884730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 00:24:57.543 [2024-07-15 22:48:40.884932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.884981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 
00:24:57.543 [2024-07-15 22:48:40.885214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.885259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 00:24:57.543 [2024-07-15 22:48:40.885434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.885479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 00:24:57.543 [2024-07-15 22:48:40.885655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.885683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 00:24:57.543 [2024-07-15 22:48:40.885888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.885916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 00:24:57.543 [2024-07-15 22:48:40.886147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.886191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 
00:24:57.543 [2024-07-15 22:48:40.886427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.886471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 00:24:57.543 [2024-07-15 22:48:40.886800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.886874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 00:24:57.543 [2024-07-15 22:48:40.887090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.887115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 00:24:57.543 [2024-07-15 22:48:40.887325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.887370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 00:24:57.543 [2024-07-15 22:48:40.887570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.887613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 
00:24:57.543 [2024-07-15 22:48:40.887767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.887794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 00:24:57.543 [2024-07-15 22:48:40.887964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.887990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 00:24:57.543 [2024-07-15 22:48:40.888189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.888234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 00:24:57.543 [2024-07-15 22:48:40.888415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.888460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 00:24:57.543 [2024-07-15 22:48:40.888734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.888795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 
00:24:57.543 [2024-07-15 22:48:40.888993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.889039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 00:24:57.543 [2024-07-15 22:48:40.889237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.889282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 00:24:57.543 [2024-07-15 22:48:40.889516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.889561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 00:24:57.543 [2024-07-15 22:48:40.889740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.889768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 00:24:57.543 [2024-07-15 22:48:40.889970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.890016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 
00:24:57.543 [2024-07-15 22:48:40.890247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.890292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 00:24:57.543 [2024-07-15 22:48:40.890495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.890543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 00:24:57.543 [2024-07-15 22:48:40.890699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.890726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 00:24:57.543 [2024-07-15 22:48:40.890936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.890981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 00:24:57.543 [2024-07-15 22:48:40.891156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.891202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 
00:24:57.543 [2024-07-15 22:48:40.891431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.891476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 00:24:57.543 [2024-07-15 22:48:40.891680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.891707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 00:24:57.543 [2024-07-15 22:48:40.891910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.891939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 00:24:57.543 [2024-07-15 22:48:40.892104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.892150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 00:24:57.543 [2024-07-15 22:48:40.892355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.892401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 
00:24:57.543 [2024-07-15 22:48:40.892604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.892650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 00:24:57.543 [2024-07-15 22:48:40.892860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.892895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 00:24:57.543 [2024-07-15 22:48:40.893103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.893131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 00:24:57.543 [2024-07-15 22:48:40.893332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.893377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 00:24:57.543 [2024-07-15 22:48:40.893645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.893689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 
00:24:57.543 [2024-07-15 22:48:40.893900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-07-15 22:48:40.893929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.543 qpair failed and we were unable to recover it. 00:24:57.543 [2024-07-15 22:48:40.894135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.894162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 00:24:57.544 [2024-07-15 22:48:40.894354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.894399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 00:24:57.544 [2024-07-15 22:48:40.894564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.894609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 00:24:57.544 [2024-07-15 22:48:40.894780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.894808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 
00:24:57.544 [2024-07-15 22:48:40.894979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.895007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 00:24:57.544 [2024-07-15 22:48:40.895218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.895247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 00:24:57.544 [2024-07-15 22:48:40.895453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.895498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 00:24:57.544 [2024-07-15 22:48:40.895707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.895752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 00:24:57.544 [2024-07-15 22:48:40.895958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.896004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 
00:24:57.544 [2024-07-15 22:48:40.896235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.896280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 00:24:57.544 [2024-07-15 22:48:40.896481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.896526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 00:24:57.544 [2024-07-15 22:48:40.896728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.896755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 00:24:57.544 [2024-07-15 22:48:40.896920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.896951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 00:24:57.544 [2024-07-15 22:48:40.897177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.897222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 
00:24:57.544 [2024-07-15 22:48:40.897419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.897465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 00:24:57.544 [2024-07-15 22:48:40.897702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.897746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 00:24:57.544 [2024-07-15 22:48:40.897975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.898020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 00:24:57.544 [2024-07-15 22:48:40.898226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.898271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 00:24:57.544 [2024-07-15 22:48:40.898529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.898582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 
00:24:57.544 [2024-07-15 22:48:40.898782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.898809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 00:24:57.544 [2024-07-15 22:48:40.899007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.899054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 00:24:57.544 [2024-07-15 22:48:40.899229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.899274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 00:24:57.544 [2024-07-15 22:48:40.899468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.899513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 00:24:57.544 [2024-07-15 22:48:40.899693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.899720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 
00:24:57.544 [2024-07-15 22:48:40.899922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.899950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 00:24:57.544 [2024-07-15 22:48:40.900182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.900231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 00:24:57.544 [2024-07-15 22:48:40.900586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.900646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 00:24:57.544 [2024-07-15 22:48:40.900848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.900882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 00:24:57.544 [2024-07-15 22:48:40.901063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.901091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 
00:24:57.544 [2024-07-15 22:48:40.901332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.901376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 00:24:57.544 [2024-07-15 22:48:40.901634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.901686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 00:24:57.544 [2024-07-15 22:48:40.901865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.901899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 00:24:57.544 [2024-07-15 22:48:40.902076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.902104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 00:24:57.544 [2024-07-15 22:48:40.902303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.902348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 
00:24:57.544 [2024-07-15 22:48:40.902601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.902650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 00:24:57.544 [2024-07-15 22:48:40.902826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.902854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 00:24:57.544 [2024-07-15 22:48:40.903072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.903101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 00:24:57.544 [2024-07-15 22:48:40.903272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.903317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 00:24:57.544 [2024-07-15 22:48:40.903545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.903589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 
00:24:57.544 [2024-07-15 22:48:40.903797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.903825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 00:24:57.544 [2024-07-15 22:48:40.904024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.544 [2024-07-15 22:48:40.904053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.544 qpair failed and we were unable to recover it. 00:24:57.545 [2024-07-15 22:48:40.904254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.904299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 00:24:57.545 [2024-07-15 22:48:40.904549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.904577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 00:24:57.545 [2024-07-15 22:48:40.904755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.904784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 
00:24:57.545 [2024-07-15 22:48:40.904982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.905027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 00:24:57.545 [2024-07-15 22:48:40.905230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.905274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 00:24:57.545 [2024-07-15 22:48:40.905530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.905576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 00:24:57.545 [2024-07-15 22:48:40.905784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.905812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 00:24:57.545 [2024-07-15 22:48:40.906020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.906065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 
00:24:57.545 [2024-07-15 22:48:40.906241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.906290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 00:24:57.545 [2024-07-15 22:48:40.906522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.906566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 00:24:57.545 [2024-07-15 22:48:40.906720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.906748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 00:24:57.545 [2024-07-15 22:48:40.906944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.906991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 00:24:57.545 [2024-07-15 22:48:40.907220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.907264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 
00:24:57.545 [2024-07-15 22:48:40.907500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.907544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 00:24:57.545 [2024-07-15 22:48:40.907724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.907751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 00:24:57.545 [2024-07-15 22:48:40.907986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.908031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 00:24:57.545 [2024-07-15 22:48:40.908239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.908284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 00:24:57.545 [2024-07-15 22:48:40.908533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.908586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 
00:24:57.545 [2024-07-15 22:48:40.908795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.908823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 00:24:57.545 [2024-07-15 22:48:40.908998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.909042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 00:24:57.545 [2024-07-15 22:48:40.909246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.909291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 00:24:57.545 [2024-07-15 22:48:40.909672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.909729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 00:24:57.545 [2024-07-15 22:48:40.909962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.910006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 
00:24:57.545 [2024-07-15 22:48:40.910246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.910290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 00:24:57.545 [2024-07-15 22:48:40.910514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.910563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 00:24:57.545 [2024-07-15 22:48:40.910709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.910736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 00:24:57.545 [2024-07-15 22:48:40.910929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.910957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 00:24:57.545 [2024-07-15 22:48:40.911124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.911170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 
00:24:57.545 [2024-07-15 22:48:40.911361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.911406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 00:24:57.545 [2024-07-15 22:48:40.911632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.911676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 00:24:57.545 [2024-07-15 22:48:40.911874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.911912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 00:24:57.545 [2024-07-15 22:48:40.912114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.912145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 00:24:57.545 [2024-07-15 22:48:40.912338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.912383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 
00:24:57.545 [2024-07-15 22:48:40.912578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.912622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 00:24:57.545 [2024-07-15 22:48:40.912818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.912846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 00:24:57.545 [2024-07-15 22:48:40.913054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.913099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 00:24:57.545 [2024-07-15 22:48:40.913327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.913371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 00:24:57.545 [2024-07-15 22:48:40.913579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.913624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 
00:24:57.545 [2024-07-15 22:48:40.913839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.913866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 00:24:57.545 [2024-07-15 22:48:40.914087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.914132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 00:24:57.545 [2024-07-15 22:48:40.914357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.545 [2024-07-15 22:48:40.914402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.545 qpair failed and we were unable to recover it. 00:24:57.546 [2024-07-15 22:48:40.914655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.914707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 00:24:57.546 [2024-07-15 22:48:40.914925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.914956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 
00:24:57.546 [2024-07-15 22:48:40.915146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.915174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 00:24:57.546 [2024-07-15 22:48:40.915370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.915415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 00:24:57.546 [2024-07-15 22:48:40.915618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.915663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 00:24:57.546 [2024-07-15 22:48:40.915836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.915863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 00:24:57.546 [2024-07-15 22:48:40.916078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.916106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 
00:24:57.546 [2024-07-15 22:48:40.916306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.916352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 00:24:57.546 [2024-07-15 22:48:40.916638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.916696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 00:24:57.546 [2024-07-15 22:48:40.916870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.916905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 00:24:57.546 [2024-07-15 22:48:40.917113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.917140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 00:24:57.546 [2024-07-15 22:48:40.917345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.917391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 
00:24:57.546 [2024-07-15 22:48:40.917589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.917634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 00:24:57.546 [2024-07-15 22:48:40.917809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.917837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 00:24:57.546 [2024-07-15 22:48:40.918024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.918053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 00:24:57.546 [2024-07-15 22:48:40.918228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.918258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 00:24:57.546 [2024-07-15 22:48:40.918506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.918551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 
00:24:57.546 [2024-07-15 22:48:40.918756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.918784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 00:24:57.546 [2024-07-15 22:48:40.918957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.918986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 00:24:57.546 [2024-07-15 22:48:40.919183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.919230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 00:24:57.546 [2024-07-15 22:48:40.919479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.919530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 00:24:57.546 [2024-07-15 22:48:40.919732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.919760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 
00:24:57.546 [2024-07-15 22:48:40.919961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.920007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 00:24:57.546 [2024-07-15 22:48:40.920201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.920232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 00:24:57.546 [2024-07-15 22:48:40.920530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.920576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 00:24:57.546 [2024-07-15 22:48:40.920776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.920803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 00:24:57.546 [2024-07-15 22:48:40.921003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.921050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 
00:24:57.546 [2024-07-15 22:48:40.921246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.921290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 00:24:57.546 [2024-07-15 22:48:40.921486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.921516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 00:24:57.546 [2024-07-15 22:48:40.921708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.921736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 00:24:57.546 [2024-07-15 22:48:40.921931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.921976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 00:24:57.546 [2024-07-15 22:48:40.922148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.922193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 
00:24:57.546 [2024-07-15 22:48:40.922389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.922434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 00:24:57.546 [2024-07-15 22:48:40.922629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.922660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 00:24:57.546 [2024-07-15 22:48:40.922883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-07-15 22:48:40.922912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.546 qpair failed and we were unable to recover it. 00:24:57.547 [2024-07-15 22:48:40.923089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-07-15 22:48:40.923116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.547 qpair failed and we were unable to recover it. 00:24:57.547 [2024-07-15 22:48:40.923312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-07-15 22:48:40.923361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.547 qpair failed and we were unable to recover it. 
00:24:57.547 [2024-07-15 22:48:40.923596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-07-15 22:48:40.923640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.547 qpair failed and we were unable to recover it. 00:24:57.547 [2024-07-15 22:48:40.923843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-07-15 22:48:40.923871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.547 qpair failed and we were unable to recover it. 00:24:57.547 [2024-07-15 22:48:40.924093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-07-15 22:48:40.924121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.547 qpair failed and we were unable to recover it. 00:24:57.547 [2024-07-15 22:48:40.924285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-07-15 22:48:40.924329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.547 qpair failed and we were unable to recover it. 00:24:57.547 [2024-07-15 22:48:40.924507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-07-15 22:48:40.924552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.547 qpair failed and we were unable to recover it. 
00:24:57.547 [2024-07-15 22:48:40.924750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-07-15 22:48:40.924777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.547 qpair failed and we were unable to recover it. 00:24:57.547 [2024-07-15 22:48:40.924979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-07-15 22:48:40.925008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.547 qpair failed and we were unable to recover it. 00:24:57.547 [2024-07-15 22:48:40.925237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-07-15 22:48:40.925281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.547 qpair failed and we were unable to recover it. 00:24:57.547 [2024-07-15 22:48:40.925510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-07-15 22:48:40.925555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.547 qpair failed and we were unable to recover it. 00:24:57.547 [2024-07-15 22:48:40.925761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-07-15 22:48:40.925788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.547 qpair failed and we were unable to recover it. 
00:24:57.547 [2024-07-15 22:48:40.925961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-07-15 22:48:40.925989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.547 qpair failed and we were unable to recover it. 00:24:57.547 [2024-07-15 22:48:40.926199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-07-15 22:48:40.926228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.547 qpair failed and we were unable to recover it. 00:24:57.547 [2024-07-15 22:48:40.926421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-07-15 22:48:40.926466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.547 qpair failed and we were unable to recover it. 00:24:57.547 [2024-07-15 22:48:40.926658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-07-15 22:48:40.926693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.547 qpair failed and we were unable to recover it. 00:24:57.547 [2024-07-15 22:48:40.926896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-07-15 22:48:40.926924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.547 qpair failed and we were unable to recover it. 
00:24:57.547 [2024-07-15 22:48:40.927117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-07-15 22:48:40.927166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.547 qpair failed and we were unable to recover it. 00:24:57.547 [2024-07-15 22:48:40.927377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-07-15 22:48:40.927422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.547 qpair failed and we were unable to recover it. 00:24:57.547 [2024-07-15 22:48:40.927651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-07-15 22:48:40.927694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.547 qpair failed and we were unable to recover it. 00:24:57.547 [2024-07-15 22:48:40.927908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-07-15 22:48:40.927936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.547 qpair failed and we were unable to recover it. 00:24:57.547 [2024-07-15 22:48:40.928103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-07-15 22:48:40.928148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.547 qpair failed and we were unable to recover it. 
00:24:57.547 [2024-07-15 22:48:40.928375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-07-15 22:48:40.928420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.547 qpair failed and we were unable to recover it. 00:24:57.547 [2024-07-15 22:48:40.928592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-07-15 22:48:40.928637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.547 qpair failed and we were unable to recover it. 00:24:57.547 [2024-07-15 22:48:40.928840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-07-15 22:48:40.928867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.547 qpair failed and we were unable to recover it. 00:24:57.547 [2024-07-15 22:48:40.929079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-07-15 22:48:40.929107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.547 qpair failed and we were unable to recover it. 00:24:57.547 [2024-07-15 22:48:40.929332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-07-15 22:48:40.929377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.547 qpair failed and we were unable to recover it. 
00:24:57.547 [2024-07-15 22:48:40.929604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-07-15 22:48:40.929648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.547 qpair failed and we were unable to recover it. 00:24:57.547 [2024-07-15 22:48:40.929823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-07-15 22:48:40.929852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.547 qpair failed and we were unable to recover it. 00:24:57.547 [2024-07-15 22:48:40.930073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-07-15 22:48:40.930118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.547 qpair failed and we were unable to recover it. 00:24:57.547 [2024-07-15 22:48:40.930363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-07-15 22:48:40.930409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.547 qpair failed and we were unable to recover it. 00:24:57.547 [2024-07-15 22:48:40.930605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-07-15 22:48:40.930650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.547 qpair failed and we were unable to recover it. 
00:24:57.550 [2024-07-15 22:48:40.957147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.550 [2024-07-15 22:48:40.957192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.550 qpair failed and we were unable to recover it. 00:24:57.550 [2024-07-15 22:48:40.957374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.550 [2024-07-15 22:48:40.957418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.550 qpair failed and we were unable to recover it. 00:24:57.550 [2024-07-15 22:48:40.957600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.550 [2024-07-15 22:48:40.957645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.550 qpair failed and we were unable to recover it. 00:24:57.550 [2024-07-15 22:48:40.957854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.550 [2024-07-15 22:48:40.957888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.550 qpair failed and we were unable to recover it. 00:24:57.550 [2024-07-15 22:48:40.958044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.550 [2024-07-15 22:48:40.958071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.550 qpair failed and we were unable to recover it. 
00:24:57.550 [2024-07-15 22:48:40.958307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.550 [2024-07-15 22:48:40.958351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.550 qpair failed and we were unable to recover it. 00:24:57.550 [2024-07-15 22:48:40.958585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.550 [2024-07-15 22:48:40.958631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.550 qpair failed and we were unable to recover it. 00:24:57.550 [2024-07-15 22:48:40.958782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.550 [2024-07-15 22:48:40.958810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.550 qpair failed and we were unable to recover it. 00:24:57.550 [2024-07-15 22:48:40.959009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.550 [2024-07-15 22:48:40.959051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.550 qpair failed and we were unable to recover it. 00:24:57.550 [2024-07-15 22:48:40.959255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.550 [2024-07-15 22:48:40.959288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.550 qpair failed and we were unable to recover it. 
00:24:57.550 [2024-07-15 22:48:40.959509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.550 [2024-07-15 22:48:40.959539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.550 qpair failed and we were unable to recover it. 00:24:57.550 [2024-07-15 22:48:40.959696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.550 [2024-07-15 22:48:40.959727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.550 qpair failed and we were unable to recover it. 00:24:57.550 [2024-07-15 22:48:40.959896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.550 [2024-07-15 22:48:40.959933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.550 qpair failed and we were unable to recover it. 00:24:57.550 [2024-07-15 22:48:40.960137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.550 [2024-07-15 22:48:40.960166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.550 qpair failed and we were unable to recover it. 00:24:57.550 [2024-07-15 22:48:40.960339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.550 [2024-07-15 22:48:40.960369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.550 qpair failed and we were unable to recover it. 
00:24:57.550 [2024-07-15 22:48:40.960562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.960589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 00:24:57.551 [2024-07-15 22:48:40.960736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.960763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 00:24:57.551 [2024-07-15 22:48:40.960936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.960963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 00:24:57.551 [2024-07-15 22:48:40.961111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.961138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 00:24:57.551 [2024-07-15 22:48:40.961285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.961312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 
00:24:57.551 [2024-07-15 22:48:40.961494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.961531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 00:24:57.551 [2024-07-15 22:48:40.961703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.961730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 00:24:57.551 [2024-07-15 22:48:40.961911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.961947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 00:24:57.551 [2024-07-15 22:48:40.962149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.962176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 00:24:57.551 [2024-07-15 22:48:40.962392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.962419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 
00:24:57.551 [2024-07-15 22:48:40.962620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.962647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 00:24:57.551 [2024-07-15 22:48:40.962856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.962891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 00:24:57.551 [2024-07-15 22:48:40.963081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.963107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 00:24:57.551 [2024-07-15 22:48:40.963282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.963309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 00:24:57.551 [2024-07-15 22:48:40.963456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.963484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 
00:24:57.551 [2024-07-15 22:48:40.963677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.963704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 00:24:57.551 [2024-07-15 22:48:40.963889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.963917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 00:24:57.551 [2024-07-15 22:48:40.964096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.964122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 00:24:57.551 [2024-07-15 22:48:40.964330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.964357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 00:24:57.551 [2024-07-15 22:48:40.964539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.964566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 
00:24:57.551 [2024-07-15 22:48:40.964721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.964747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 00:24:57.551 [2024-07-15 22:48:40.964947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.964974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 00:24:57.551 [2024-07-15 22:48:40.965129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.965156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 00:24:57.551 [2024-07-15 22:48:40.965332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.965359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 00:24:57.551 [2024-07-15 22:48:40.965594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.965637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 
00:24:57.551 [2024-07-15 22:48:40.965840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.965866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 00:24:57.551 [2024-07-15 22:48:40.966052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.966078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 00:24:57.551 [2024-07-15 22:48:40.966261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.966288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 00:24:57.551 [2024-07-15 22:48:40.966465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.966494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 00:24:57.551 [2024-07-15 22:48:40.966714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.966743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 
00:24:57.551 [2024-07-15 22:48:40.966903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.966956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 00:24:57.551 [2024-07-15 22:48:40.967157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.967200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 00:24:57.551 [2024-07-15 22:48:40.967360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.967394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 00:24:57.551 [2024-07-15 22:48:40.967584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.967613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 00:24:57.551 [2024-07-15 22:48:40.967808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.967837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 
00:24:57.551 [2024-07-15 22:48:40.968064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.968091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 00:24:57.551 [2024-07-15 22:48:40.968275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.968305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 00:24:57.551 [2024-07-15 22:48:40.968487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.968514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 00:24:57.551 [2024-07-15 22:48:40.968685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.968715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 00:24:57.551 [2024-07-15 22:48:40.968917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.968960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 
00:24:57.551 [2024-07-15 22:48:40.969136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.969177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.551 qpair failed and we were unable to recover it. 00:24:57.551 [2024-07-15 22:48:40.969401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-07-15 22:48:40.969436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.552 qpair failed and we were unable to recover it. 00:24:57.552 [2024-07-15 22:48:40.969636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.552 [2024-07-15 22:48:40.969677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.552 qpair failed and we were unable to recover it. 00:24:57.552 [2024-07-15 22:48:40.969846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.552 [2024-07-15 22:48:40.969883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.552 qpair failed and we were unable to recover it. 00:24:57.552 [2024-07-15 22:48:40.970077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.552 [2024-07-15 22:48:40.970103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.552 qpair failed and we were unable to recover it. 
00:24:57.552 [2024-07-15 22:48:40.970306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.552 [2024-07-15 22:48:40.970333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.552 qpair failed and we were unable to recover it. 00:24:57.552 [2024-07-15 22:48:40.970551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.552 [2024-07-15 22:48:40.970581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.552 qpair failed and we were unable to recover it. 00:24:57.552 [2024-07-15 22:48:40.970806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.552 [2024-07-15 22:48:40.970836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.552 qpair failed and we were unable to recover it. 00:24:57.552 [2024-07-15 22:48:40.971038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.552 [2024-07-15 22:48:40.971065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.552 qpair failed and we were unable to recover it. 00:24:57.552 [2024-07-15 22:48:40.971274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.552 [2024-07-15 22:48:40.971301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.552 qpair failed and we were unable to recover it. 
00:24:57.552 [2024-07-15 22:48:40.971472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.552 [2024-07-15 22:48:40.971502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.552 qpair failed and we were unable to recover it. 00:24:57.552 [2024-07-15 22:48:40.971666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.552 [2024-07-15 22:48:40.971696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.552 qpair failed and we were unable to recover it. 00:24:57.552 [2024-07-15 22:48:40.971928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.552 [2024-07-15 22:48:40.971955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.552 qpair failed and we were unable to recover it. 00:24:57.552 [2024-07-15 22:48:40.972101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.552 [2024-07-15 22:48:40.972127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.552 qpair failed and we were unable to recover it. 00:24:57.552 [2024-07-15 22:48:40.972273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.552 [2024-07-15 22:48:40.972315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.552 qpair failed and we were unable to recover it. 
00:24:57.552 [2024-07-15 22:48:40.972503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.552 [2024-07-15 22:48:40.972532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.552 qpair failed and we were unable to recover it. 00:24:57.552 [2024-07-15 22:48:40.972721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.552 [2024-07-15 22:48:40.972750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.552 qpair failed and we were unable to recover it. 00:24:57.552 [2024-07-15 22:48:40.972920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.552 [2024-07-15 22:48:40.972947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.552 qpair failed and we were unable to recover it. 00:24:57.552 [2024-07-15 22:48:40.973097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.552 [2024-07-15 22:48:40.973125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.552 qpair failed and we were unable to recover it. 00:24:57.552 [2024-07-15 22:48:40.973312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.552 [2024-07-15 22:48:40.973345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.552 qpair failed and we were unable to recover it. 
00:24:57.552 [2024-07-15 22:48:40.973536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.552 [2024-07-15 22:48:40.973565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:57.552 qpair failed and we were unable to recover it.
00:24:57.555 [... the same posix.c/nvme_tcp.c error pair for tqpair=0xd3f200 (addr=10.0.0.2, port=4420, errno = 111) repeats continuously from 22:48:40.973 through 22:48:40.998; duplicate entries omitted ...]
00:24:57.555 [2024-07-15 22:48:40.998536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-07-15 22:48:40.998564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.555 qpair failed and we were unable to recover it. 00:24:57.555 [2024-07-15 22:48:40.998757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-07-15 22:48:40.998787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.555 qpair failed and we were unable to recover it. 00:24:57.555 [2024-07-15 22:48:40.998967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-07-15 22:48:40.998994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.555 qpair failed and we were unable to recover it. 00:24:57.555 [2024-07-15 22:48:40.999194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-07-15 22:48:40.999221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.555 qpair failed and we were unable to recover it. 00:24:57.555 [2024-07-15 22:48:40.999397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-07-15 22:48:40.999428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.555 qpair failed and we were unable to recover it. 
00:24:57.555 [2024-07-15 22:48:40.999617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-07-15 22:48:40.999646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.555 qpair failed and we were unable to recover it. 00:24:57.555 [2024-07-15 22:48:40.999848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-07-15 22:48:40.999875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.555 qpair failed and we were unable to recover it. 00:24:57.555 [2024-07-15 22:48:41.000086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-07-15 22:48:41.000128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.555 qpair failed and we were unable to recover it. 00:24:57.555 [2024-07-15 22:48:41.000319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-07-15 22:48:41.000345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.555 qpair failed and we were unable to recover it. 00:24:57.555 [2024-07-15 22:48:41.000504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-07-15 22:48:41.000531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.555 qpair failed and we were unable to recover it. 
00:24:57.555 [2024-07-15 22:48:41.000733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-07-15 22:48:41.000759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.555 qpair failed and we were unable to recover it. 00:24:57.555 [2024-07-15 22:48:41.000921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-07-15 22:48:41.000951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.555 qpair failed and we were unable to recover it. 00:24:57.555 [2024-07-15 22:48:41.001136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-07-15 22:48:41.001166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.555 qpair failed and we were unable to recover it. 00:24:57.555 [2024-07-15 22:48:41.001390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-07-15 22:48:41.001416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.555 qpair failed and we were unable to recover it. 00:24:57.555 [2024-07-15 22:48:41.001565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-07-15 22:48:41.001591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.555 qpair failed and we were unable to recover it. 
00:24:57.555 [2024-07-15 22:48:41.001758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-07-15 22:48:41.001784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.555 qpair failed and we were unable to recover it. 00:24:57.555 [2024-07-15 22:48:41.002048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-07-15 22:48:41.002078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.555 qpair failed and we were unable to recover it. 00:24:57.555 [2024-07-15 22:48:41.002266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-07-15 22:48:41.002295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.555 qpair failed and we were unable to recover it. 00:24:57.555 [2024-07-15 22:48:41.002486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-07-15 22:48:41.002513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.555 qpair failed and we were unable to recover it. 00:24:57.555 [2024-07-15 22:48:41.002657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-07-15 22:48:41.002683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.555 qpair failed and we were unable to recover it. 
00:24:57.555 [2024-07-15 22:48:41.002860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-07-15 22:48:41.002921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.555 qpair failed and we were unable to recover it. 00:24:57.555 [2024-07-15 22:48:41.003091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-07-15 22:48:41.003120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.555 qpair failed and we were unable to recover it. 00:24:57.555 [2024-07-15 22:48:41.003286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-07-15 22:48:41.003313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.555 qpair failed and we were unable to recover it. 00:24:57.555 [2024-07-15 22:48:41.003493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-07-15 22:48:41.003520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.555 qpair failed and we were unable to recover it. 00:24:57.555 [2024-07-15 22:48:41.003711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-07-15 22:48:41.003740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.555 qpair failed and we were unable to recover it. 
00:24:57.555 [2024-07-15 22:48:41.003931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-07-15 22:48:41.003961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.555 qpair failed and we were unable to recover it. 00:24:57.555 [2024-07-15 22:48:41.004148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.556 [2024-07-15 22:48:41.004175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.556 qpair failed and we were unable to recover it. 00:24:57.556 [2024-07-15 22:48:41.004325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.556 [2024-07-15 22:48:41.004352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.556 qpair failed and we were unable to recover it. 00:24:57.556 [2024-07-15 22:48:41.004527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.556 [2024-07-15 22:48:41.004571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.556 qpair failed and we were unable to recover it. 00:24:57.556 [2024-07-15 22:48:41.004734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.556 [2024-07-15 22:48:41.004764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.556 qpair failed and we were unable to recover it. 
00:24:57.556 [2024-07-15 22:48:41.004965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.556 [2024-07-15 22:48:41.004992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.556 qpair failed and we were unable to recover it. 00:24:57.556 [2024-07-15 22:48:41.005223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.556 [2024-07-15 22:48:41.005252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.556 qpair failed and we were unable to recover it. 00:24:57.556 [2024-07-15 22:48:41.005449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.556 [2024-07-15 22:48:41.005479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.556 qpair failed and we were unable to recover it. 00:24:57.556 [2024-07-15 22:48:41.005682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.556 [2024-07-15 22:48:41.005708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.556 qpair failed and we were unable to recover it. 00:24:57.556 [2024-07-15 22:48:41.005852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.556 [2024-07-15 22:48:41.005890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.556 qpair failed and we were unable to recover it. 
00:24:57.556 [2024-07-15 22:48:41.006089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.556 [2024-07-15 22:48:41.006118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.556 qpair failed and we were unable to recover it. 00:24:57.556 [2024-07-15 22:48:41.006319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.556 [2024-07-15 22:48:41.006348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.556 qpair failed and we were unable to recover it. 00:24:57.556 [2024-07-15 22:48:41.006519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.556 [2024-07-15 22:48:41.006548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.556 qpair failed and we were unable to recover it. 00:24:57.556 [2024-07-15 22:48:41.006749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.556 [2024-07-15 22:48:41.006776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.556 qpair failed and we were unable to recover it. 00:24:57.556 [2024-07-15 22:48:41.006955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.556 [2024-07-15 22:48:41.006983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.556 qpair failed and we were unable to recover it. 
00:24:57.556 [2024-07-15 22:48:41.007149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.556 [2024-07-15 22:48:41.007179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.556 qpair failed and we were unable to recover it. 00:24:57.556 [2024-07-15 22:48:41.007342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.556 [2024-07-15 22:48:41.007371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.556 qpair failed and we were unable to recover it. 00:24:57.556 [2024-07-15 22:48:41.007563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.556 [2024-07-15 22:48:41.007589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.556 qpair failed and we were unable to recover it. 00:24:57.556 [2024-07-15 22:48:41.007811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.556 [2024-07-15 22:48:41.007840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.556 qpair failed and we were unable to recover it. 00:24:57.556 [2024-07-15 22:48:41.008075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.556 [2024-07-15 22:48:41.008101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.556 qpair failed and we were unable to recover it. 
00:24:57.556 [2024-07-15 22:48:41.008279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.556 [2024-07-15 22:48:41.008305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.556 qpair failed and we were unable to recover it. 00:24:57.556 [2024-07-15 22:48:41.008459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.556 [2024-07-15 22:48:41.008486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.556 qpair failed and we were unable to recover it. 00:24:57.835 [2024-07-15 22:48:41.008679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.835 [2024-07-15 22:48:41.008710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.835 qpair failed and we were unable to recover it. 00:24:57.835 [2024-07-15 22:48:41.008889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.835 [2024-07-15 22:48:41.008919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.835 qpair failed and we were unable to recover it. 00:24:57.835 [2024-07-15 22:48:41.009140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.835 [2024-07-15 22:48:41.009169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.835 qpair failed and we were unable to recover it. 
00:24:57.835 [2024-07-15 22:48:41.009345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.835 [2024-07-15 22:48:41.009372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.835 qpair failed and we were unable to recover it. 00:24:57.835 [2024-07-15 22:48:41.009572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.835 [2024-07-15 22:48:41.009598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.835 qpair failed and we were unable to recover it. 00:24:57.835 [2024-07-15 22:48:41.009773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.835 [2024-07-15 22:48:41.009802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.835 qpair failed and we were unable to recover it. 00:24:57.835 [2024-07-15 22:48:41.009987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.835 [2024-07-15 22:48:41.010017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.835 qpair failed and we were unable to recover it. 00:24:57.835 [2024-07-15 22:48:41.010190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.835 [2024-07-15 22:48:41.010217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.835 qpair failed and we were unable to recover it. 
00:24:57.835 [2024-07-15 22:48:41.010370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.835 [2024-07-15 22:48:41.010412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.835 qpair failed and we were unable to recover it. 00:24:57.835 [2024-07-15 22:48:41.010601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.835 [2024-07-15 22:48:41.010631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.835 qpair failed and we were unable to recover it. 00:24:57.835 [2024-07-15 22:48:41.010810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.835 [2024-07-15 22:48:41.010839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.835 qpair failed and we were unable to recover it. 00:24:57.835 [2024-07-15 22:48:41.011048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.835 [2024-07-15 22:48:41.011074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.835 qpair failed and we were unable to recover it. 00:24:57.835 [2024-07-15 22:48:41.011275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.835 [2024-07-15 22:48:41.011304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.835 qpair failed and we were unable to recover it. 
00:24:57.836 [2024-07-15 22:48:41.011502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.836 [2024-07-15 22:48:41.011531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.836 qpair failed and we were unable to recover it. 00:24:57.836 [2024-07-15 22:48:41.011713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.836 [2024-07-15 22:48:41.011742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.836 qpair failed and we were unable to recover it. 00:24:57.836 [2024-07-15 22:48:41.011936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.836 [2024-07-15 22:48:41.011963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.836 qpair failed and we were unable to recover it. 00:24:57.836 [2024-07-15 22:48:41.012158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.836 [2024-07-15 22:48:41.012188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.836 qpair failed and we were unable to recover it. 00:24:57.836 [2024-07-15 22:48:41.012390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.836 [2024-07-15 22:48:41.012419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.836 qpair failed and we were unable to recover it. 
00:24:57.836 [2024-07-15 22:48:41.012619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.836 [2024-07-15 22:48:41.012648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.836 qpair failed and we were unable to recover it. 00:24:57.836 [2024-07-15 22:48:41.012849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.836 [2024-07-15 22:48:41.012884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.836 qpair failed and we were unable to recover it. 00:24:57.836 [2024-07-15 22:48:41.013028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.836 [2024-07-15 22:48:41.013054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.836 qpair failed and we were unable to recover it. 00:24:57.836 [2024-07-15 22:48:41.013231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.836 [2024-07-15 22:48:41.013257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.836 qpair failed and we were unable to recover it. 00:24:57.836 [2024-07-15 22:48:41.013430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.836 [2024-07-15 22:48:41.013460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.836 qpair failed and we were unable to recover it. 
00:24:57.836 [2024-07-15 22:48:41.013683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.836 [2024-07-15 22:48:41.013710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.836 qpair failed and we were unable to recover it. 00:24:57.836 [2024-07-15 22:48:41.013886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.836 [2024-07-15 22:48:41.013913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.836 qpair failed and we were unable to recover it. 00:24:57.836 [2024-07-15 22:48:41.014101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.836 [2024-07-15 22:48:41.014128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.836 qpair failed and we were unable to recover it. 00:24:57.836 [2024-07-15 22:48:41.014274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.836 [2024-07-15 22:48:41.014300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.836 qpair failed and we were unable to recover it. 00:24:57.836 [2024-07-15 22:48:41.014499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.836 [2024-07-15 22:48:41.014525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.836 qpair failed and we were unable to recover it. 
00:24:57.836 [2024-07-15 22:48:41.014749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.836 [2024-07-15 22:48:41.014778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.836 qpair failed and we were unable to recover it. 00:24:57.836 [2024-07-15 22:48:41.014959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.836 [2024-07-15 22:48:41.014989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.836 qpair failed and we were unable to recover it. 00:24:57.836 [2024-07-15 22:48:41.015176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.836 [2024-07-15 22:48:41.015206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.836 qpair failed and we were unable to recover it. 00:24:57.836 [2024-07-15 22:48:41.015421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.836 [2024-07-15 22:48:41.015448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.836 qpair failed and we were unable to recover it. 00:24:57.836 [2024-07-15 22:48:41.015626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.836 [2024-07-15 22:48:41.015655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.836 qpair failed and we were unable to recover it. 
00:24:57.839 [2024-07-15 22:48:41.040421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-07-15 22:48:41.040447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.839 qpair failed and we were unable to recover it. 00:24:57.839 [2024-07-15 22:48:41.040615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-07-15 22:48:41.040641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.839 qpair failed and we were unable to recover it. 00:24:57.839 [2024-07-15 22:48:41.040833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-07-15 22:48:41.040862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.839 qpair failed and we were unable to recover it. 00:24:57.839 [2024-07-15 22:48:41.041067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-07-15 22:48:41.041094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.839 qpair failed and we were unable to recover it. 00:24:57.839 [2024-07-15 22:48:41.041294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-07-15 22:48:41.041320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.839 qpair failed and we were unable to recover it. 
00:24:57.839 [2024-07-15 22:48:41.041519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-07-15 22:48:41.041549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.839 qpair failed and we were unable to recover it. 00:24:57.839 [2024-07-15 22:48:41.041768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-07-15 22:48:41.041797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.839 qpair failed and we were unable to recover it. 00:24:57.839 [2024-07-15 22:48:41.041989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-07-15 22:48:41.042019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.839 qpair failed and we were unable to recover it. 00:24:57.839 [2024-07-15 22:48:41.042224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-07-15 22:48:41.042251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.839 qpair failed and we were unable to recover it. 00:24:57.839 [2024-07-15 22:48:41.042446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-07-15 22:48:41.042475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.839 qpair failed and we were unable to recover it. 
00:24:57.839 [2024-07-15 22:48:41.042668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-07-15 22:48:41.042697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.839 qpair failed and we were unable to recover it. 00:24:57.839 [2024-07-15 22:48:41.042897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-07-15 22:48:41.042926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.839 qpair failed and we were unable to recover it. 00:24:57.839 [2024-07-15 22:48:41.043099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-07-15 22:48:41.043126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.839 qpair failed and we were unable to recover it. 00:24:57.839 [2024-07-15 22:48:41.043352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-07-15 22:48:41.043381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.839 qpair failed and we were unable to recover it. 00:24:57.839 [2024-07-15 22:48:41.043578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-07-15 22:48:41.043607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.839 qpair failed and we were unable to recover it. 
00:24:57.839 [2024-07-15 22:48:41.043800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-07-15 22:48:41.043829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.839 qpair failed and we were unable to recover it. 00:24:57.839 [2024-07-15 22:48:41.044060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-07-15 22:48:41.044088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.839 qpair failed and we were unable to recover it. 00:24:57.839 [2024-07-15 22:48:41.044288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-07-15 22:48:41.044317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.839 qpair failed and we were unable to recover it. 00:24:57.839 [2024-07-15 22:48:41.044485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-07-15 22:48:41.044514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.839 qpair failed and we were unable to recover it. 00:24:57.839 [2024-07-15 22:48:41.044731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-07-15 22:48:41.044760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.839 qpair failed and we were unable to recover it. 
00:24:57.839 [2024-07-15 22:48:41.044955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-07-15 22:48:41.044983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.839 qpair failed and we were unable to recover it. 00:24:57.839 [2024-07-15 22:48:41.045179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-07-15 22:48:41.045212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.839 qpair failed and we were unable to recover it. 00:24:57.839 [2024-07-15 22:48:41.045430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-07-15 22:48:41.045459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.839 qpair failed and we were unable to recover it. 00:24:57.839 [2024-07-15 22:48:41.045626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-07-15 22:48:41.045655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.839 qpair failed and we were unable to recover it. 00:24:57.839 [2024-07-15 22:48:41.045846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-07-15 22:48:41.045872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.839 qpair failed and we were unable to recover it. 
00:24:57.839 [2024-07-15 22:48:41.046050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-07-15 22:48:41.046076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.839 qpair failed and we were unable to recover it. 00:24:57.839 [2024-07-15 22:48:41.046242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-07-15 22:48:41.046271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.839 qpair failed and we were unable to recover it. 00:24:57.839 [2024-07-15 22:48:41.046465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-07-15 22:48:41.046494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.839 qpair failed and we were unable to recover it. 00:24:57.839 [2024-07-15 22:48:41.046692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-07-15 22:48:41.046718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 00:24:57.840 [2024-07-15 22:48:41.046911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.046938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 
00:24:57.840 [2024-07-15 22:48:41.047142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.047168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 00:24:57.840 [2024-07-15 22:48:41.047373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.047402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 00:24:57.840 [2024-07-15 22:48:41.047588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.047614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 00:24:57.840 [2024-07-15 22:48:41.047813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.047842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 00:24:57.840 [2024-07-15 22:48:41.048034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.048060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 
00:24:57.840 [2024-07-15 22:48:41.048237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.048263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 00:24:57.840 [2024-07-15 22:48:41.048496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.048522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 00:24:57.840 [2024-07-15 22:48:41.048748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.048774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 00:24:57.840 [2024-07-15 22:48:41.048943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.048970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 00:24:57.840 [2024-07-15 22:48:41.049203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.049232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 
00:24:57.840 [2024-07-15 22:48:41.049423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.049449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 00:24:57.840 [2024-07-15 22:48:41.049674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.049703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 00:24:57.840 [2024-07-15 22:48:41.049904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.049931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 00:24:57.840 [2024-07-15 22:48:41.050126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.050155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 00:24:57.840 [2024-07-15 22:48:41.050356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.050382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 
00:24:57.840 [2024-07-15 22:48:41.050556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.050582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 00:24:57.840 [2024-07-15 22:48:41.050756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.050785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 00:24:57.840 [2024-07-15 22:48:41.050980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.051013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 00:24:57.840 [2024-07-15 22:48:41.051236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.051269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 00:24:57.840 [2024-07-15 22:48:41.051510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.051536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 
00:24:57.840 [2024-07-15 22:48:41.051691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.051719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 00:24:57.840 [2024-07-15 22:48:41.051918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.051949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 00:24:57.840 [2024-07-15 22:48:41.052171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.052197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 00:24:57.840 [2024-07-15 22:48:41.052390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.052419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 00:24:57.840 [2024-07-15 22:48:41.052611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.052640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 
00:24:57.840 [2024-07-15 22:48:41.052825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.052854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 00:24:57.840 [2024-07-15 22:48:41.053086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.053113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 00:24:57.840 [2024-07-15 22:48:41.053340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.053369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 00:24:57.840 [2024-07-15 22:48:41.053543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.053572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 00:24:57.840 [2024-07-15 22:48:41.053735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.053764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 
00:24:57.840 [2024-07-15 22:48:41.053966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.053993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 00:24:57.840 [2024-07-15 22:48:41.054138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.054165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 00:24:57.840 [2024-07-15 22:48:41.054371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.054400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 00:24:57.840 [2024-07-15 22:48:41.054566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.054595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 00:24:57.840 [2024-07-15 22:48:41.054785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-07-15 22:48:41.054811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.840 qpair failed and we were unable to recover it. 
00:24:57.840 [2024-07-15 22:48:41.054990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.055017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 00:24:57.841 [2024-07-15 22:48:41.055187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.055214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 00:24:57.841 [2024-07-15 22:48:41.055412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.055441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 00:24:57.841 [2024-07-15 22:48:41.055667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.055693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 00:24:57.841 [2024-07-15 22:48:41.055899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.055929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 
00:24:57.841 [2024-07-15 22:48:41.056111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.056140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 00:24:57.841 [2024-07-15 22:48:41.056327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.056356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 00:24:57.841 [2024-07-15 22:48:41.056553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.056579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 00:24:57.841 [2024-07-15 22:48:41.056800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.056829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 00:24:57.841 [2024-07-15 22:48:41.057045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.057075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 
00:24:57.841 [2024-07-15 22:48:41.057267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.057298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 00:24:57.841 [2024-07-15 22:48:41.057501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.057527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 00:24:57.841 [2024-07-15 22:48:41.057721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.057751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 00:24:57.841 [2024-07-15 22:48:41.057941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.057972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 00:24:57.841 [2024-07-15 22:48:41.058173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.058199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 
00:24:57.841 [2024-07-15 22:48:41.058406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.058432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 00:24:57.841 [2024-07-15 22:48:41.058584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.058610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 00:24:57.841 [2024-07-15 22:48:41.058757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.058802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 00:24:57.841 [2024-07-15 22:48:41.059024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.059051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 00:24:57.841 [2024-07-15 22:48:41.059250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.059276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 
00:24:57.841 [2024-07-15 22:48:41.059507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.059533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 00:24:57.841 [2024-07-15 22:48:41.059704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.059730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 00:24:57.841 [2024-07-15 22:48:41.059924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.059954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 00:24:57.841 [2024-07-15 22:48:41.060179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.060205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 00:24:57.841 [2024-07-15 22:48:41.060382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.060415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 
00:24:57.841 [2024-07-15 22:48:41.060584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.060613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 00:24:57.841 [2024-07-15 22:48:41.060798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.060824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 00:24:57.841 [2024-07-15 22:48:41.061023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.061051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 00:24:57.841 [2024-07-15 22:48:41.061214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.061243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 00:24:57.841 [2024-07-15 22:48:41.061416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.061445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 
00:24:57.841 [2024-07-15 22:48:41.061628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.061656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 00:24:57.841 [2024-07-15 22:48:41.061861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.061897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 00:24:57.841 [2024-07-15 22:48:41.062122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.062151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 00:24:57.841 [2024-07-15 22:48:41.062376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.062405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 00:24:57.841 [2024-07-15 22:48:41.062600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.062629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 
00:24:57.841 [2024-07-15 22:48:41.062818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.062844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 00:24:57.841 [2024-07-15 22:48:41.063053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.063079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 00:24:57.841 [2024-07-15 22:48:41.063254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.063282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 00:24:57.841 [2024-07-15 22:48:41.063506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.063535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 00:24:57.841 [2024-07-15 22:48:41.063763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.063789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 
00:24:57.841 [2024-07-15 22:48:41.063982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.064012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 00:24:57.841 [2024-07-15 22:48:41.064172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.841 [2024-07-15 22:48:41.064201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.841 qpair failed and we were unable to recover it. 00:24:57.842 [2024-07-15 22:48:41.064394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.064420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 00:24:57.842 [2024-07-15 22:48:41.064562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.064588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 00:24:57.842 [2024-07-15 22:48:41.064785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.064811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 
00:24:57.842 [2024-07-15 22:48:41.064988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.065017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 00:24:57.842 [2024-07-15 22:48:41.065190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.065218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 00:24:57.842 [2024-07-15 22:48:41.065411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.065437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 00:24:57.842 [2024-07-15 22:48:41.065636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.065664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 00:24:57.842 [2024-07-15 22:48:41.065858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.065895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 
00:24:57.842 [2024-07-15 22:48:41.066123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.066150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 00:24:57.842 [2024-07-15 22:48:41.066300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.066329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 00:24:57.842 [2024-07-15 22:48:41.066551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.066580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 00:24:57.842 [2024-07-15 22:48:41.066786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.066813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 00:24:57.842 [2024-07-15 22:48:41.067017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.067044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 
00:24:57.842 [2024-07-15 22:48:41.067281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.067307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 00:24:57.842 [2024-07-15 22:48:41.067448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.067475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 00:24:57.842 [2024-07-15 22:48:41.067651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.067677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 00:24:57.842 [2024-07-15 22:48:41.067829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.067856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 00:24:57.842 [2024-07-15 22:48:41.068088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.068129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 
00:24:57.842 [2024-07-15 22:48:41.068314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.068343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 00:24:57.842 [2024-07-15 22:48:41.068546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.068590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 00:24:57.842 [2024-07-15 22:48:41.068818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.068862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 00:24:57.842 [2024-07-15 22:48:41.069054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.069082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 00:24:57.842 [2024-07-15 22:48:41.069230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.069257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 
00:24:57.842 [2024-07-15 22:48:41.069420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.069447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 00:24:57.842 [2024-07-15 22:48:41.069623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.069649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 00:24:57.842 [2024-07-15 22:48:41.069848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.069875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 00:24:57.842 [2024-07-15 22:48:41.070059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.070086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 00:24:57.842 [2024-07-15 22:48:41.070285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.070330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 
00:24:57.842 [2024-07-15 22:48:41.070560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.070603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 00:24:57.842 [2024-07-15 22:48:41.070782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.070808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 00:24:57.842 [2024-07-15 22:48:41.070988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.071015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 00:24:57.842 [2024-07-15 22:48:41.071214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.071260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 00:24:57.842 [2024-07-15 22:48:41.071494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.071538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 
00:24:57.842 [2024-07-15 22:48:41.071949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.071977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 00:24:57.842 [2024-07-15 22:48:41.072210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.072255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 00:24:57.842 [2024-07-15 22:48:41.072425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.072472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 00:24:57.842 [2024-07-15 22:48:41.072677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.072727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 00:24:57.842 [2024-07-15 22:48:41.072948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.072976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 
00:24:57.842 [2024-07-15 22:48:41.073205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.073248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 00:24:57.842 [2024-07-15 22:48:41.073473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.073517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 00:24:57.842 [2024-07-15 22:48:41.073721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.842 [2024-07-15 22:48:41.073765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.842 qpair failed and we were unable to recover it. 00:24:57.843 [2024-07-15 22:48:41.073990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-07-15 22:48:41.074034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.843 qpair failed and we were unable to recover it. 00:24:57.843 [2024-07-15 22:48:41.074240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-07-15 22:48:41.074269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.843 qpair failed and we were unable to recover it. 
00:24:57.843 [2024-07-15 22:48:41.074502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-07-15 22:48:41.074547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.843 qpair failed and we were unable to recover it. 00:24:57.843 [2024-07-15 22:48:41.074726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-07-15 22:48:41.074752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.843 qpair failed and we were unable to recover it. 00:24:57.843 [2024-07-15 22:48:41.074975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-07-15 22:48:41.075019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.843 qpair failed and we were unable to recover it. 00:24:57.843 [2024-07-15 22:48:41.075203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-07-15 22:48:41.075230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.843 qpair failed and we were unable to recover it. 00:24:57.843 [2024-07-15 22:48:41.075384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-07-15 22:48:41.075412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.843 qpair failed and we were unable to recover it. 
00:24:57.843 [2024-07-15 22:48:41.075634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-07-15 22:48:41.075678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.843 qpair failed and we were unable to recover it. 00:24:57.843 [2024-07-15 22:48:41.075848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-07-15 22:48:41.075875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.843 qpair failed and we were unable to recover it. 00:24:57.843 [2024-07-15 22:48:41.076080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-07-15 22:48:41.076125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.843 qpair failed and we were unable to recover it. 00:24:57.843 [2024-07-15 22:48:41.076356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-07-15 22:48:41.076383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.843 qpair failed and we were unable to recover it. 00:24:57.843 [2024-07-15 22:48:41.076562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-07-15 22:48:41.076589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.843 qpair failed and we were unable to recover it. 
00:24:57.843 [2024-07-15 22:48:41.076762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-07-15 22:48:41.076789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.843 qpair failed and we were unable to recover it. 00:24:57.843 [2024-07-15 22:48:41.076993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-07-15 22:48:41.077042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.843 qpair failed and we were unable to recover it. 00:24:57.843 [2024-07-15 22:48:41.077228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-07-15 22:48:41.077255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.843 qpair failed and we were unable to recover it. 00:24:57.843 [2024-07-15 22:48:41.077455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-07-15 22:48:41.077499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.843 qpair failed and we were unable to recover it. 00:24:57.843 [2024-07-15 22:48:41.077664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-07-15 22:48:41.077708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.843 qpair failed and we were unable to recover it. 
00:24:57.843 [2024-07-15 22:48:41.077853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-07-15 22:48:41.077887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.843 qpair failed and we were unable to recover it. 00:24:57.843 [2024-07-15 22:48:41.078071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-07-15 22:48:41.078098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.843 qpair failed and we were unable to recover it. 00:24:57.843 [2024-07-15 22:48:41.078329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-07-15 22:48:41.078372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.843 qpair failed and we were unable to recover it. 00:24:57.843 [2024-07-15 22:48:41.078566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-07-15 22:48:41.078611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.843 qpair failed and we were unable to recover it. 00:24:57.843 [2024-07-15 22:48:41.078813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-07-15 22:48:41.078840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.843 qpair failed and we were unable to recover it. 
00:24:57.843 [2024-07-15 22:48:41.079058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-07-15 22:48:41.079086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.843 qpair failed and we were unable to recover it. 00:24:57.843 [2024-07-15 22:48:41.079285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-07-15 22:48:41.079331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.843 qpair failed and we were unable to recover it. 00:24:57.843 [2024-07-15 22:48:41.079481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-07-15 22:48:41.079509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.843 qpair failed and we were unable to recover it. 00:24:57.843 [2024-07-15 22:48:41.079683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-07-15 22:48:41.079711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.843 qpair failed and we were unable to recover it. 00:24:57.843 [2024-07-15 22:48:41.079911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-07-15 22:48:41.079939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.843 qpair failed and we were unable to recover it. 
00:24:57.843 [2024-07-15 22:48:41.080117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-07-15 22:48:41.080164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.843 qpair failed and we were unable to recover it. 00:24:57.843 [2024-07-15 22:48:41.080391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-07-15 22:48:41.080434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.843 qpair failed and we were unable to recover it. 00:24:57.843 [2024-07-15 22:48:41.080587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-07-15 22:48:41.080615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.843 qpair failed and we were unable to recover it. 00:24:57.843 [2024-07-15 22:48:41.080788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-07-15 22:48:41.080816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.843 qpair failed and we were unable to recover it. 00:24:57.843 [2024-07-15 22:48:41.081045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-07-15 22:48:41.081090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.843 qpair failed and we were unable to recover it. 
00:24:57.843 [2024-07-15 22:48:41.081317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.843 [2024-07-15 22:48:41.081361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.843 qpair failed and we were unable to recover it.
00:24:57.843 [2024-07-15 22:48:41.081594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.843 [2024-07-15 22:48:41.081639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.843 qpair failed and we were unable to recover it.
00:24:57.843 [2024-07-15 22:48:41.081797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.843 [2024-07-15 22:48:41.081824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.843 qpair failed and we were unable to recover it.
00:24:57.843 [2024-07-15 22:48:41.082055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.843 [2024-07-15 22:48:41.082104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.843 qpair failed and we were unable to recover it.
00:24:57.843 [2024-07-15 22:48:41.082337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.843 [2024-07-15 22:48:41.082381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.843 qpair failed and we were unable to recover it.
00:24:57.843 [2024-07-15 22:48:41.082559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.843 [2024-07-15 22:48:41.082608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.843 qpair failed and we were unable to recover it.
00:24:57.843 [2024-07-15 22:48:41.082781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.843 [2024-07-15 22:48:41.082808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.843 qpair failed and we were unable to recover it.
00:24:57.843 [2024-07-15 22:48:41.083023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.843 [2024-07-15 22:48:41.083069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.843 qpair failed and we were unable to recover it.
00:24:57.843 [2024-07-15 22:48:41.083282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.843 [2024-07-15 22:48:41.083325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.083527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.083571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.083768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.083794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.083968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.084013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.084255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.084282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.084472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.084516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.084714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.084759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.084964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.084992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.085142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.085169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.085348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.085391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.085583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.085627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.085803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.085830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.086033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.086077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.086275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.086319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.086520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.086563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.086738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.086765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.086988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.087033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.087236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.087279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.087510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.087553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.087701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.087728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.087939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.087967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.088196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.088239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.088468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.088513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.088692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.088719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.088921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.088948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.089152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.089197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.089421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.089466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.089672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.089716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.089927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.089955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.090131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.090175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.090382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.090428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.090613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.090657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.090832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.090859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.091102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.091146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.091385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.091429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.844 [2024-07-15 22:48:41.091635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.844 [2024-07-15 22:48:41.091682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.844 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.091857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.091894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.092099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.092145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.092349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.092393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.092604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.092648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.092828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.092855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.093052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.093080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.093308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.093352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.093556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.093600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.093784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.093811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.094013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.094040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.094210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.094254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.094484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.094528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.094762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.094789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.094973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.095001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.095230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.095272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.095479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.095508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.095674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.095702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.095906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.095933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.096106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.096153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.096333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.096360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.096508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.096535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.096737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.096764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.096990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.097034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.097265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.097308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.097508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.097553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.097732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.097759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.097987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.098032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.098250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.098278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.098477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.098521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.098697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.098724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.098892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.098918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.099130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.099158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.099356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.099400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.099612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.099656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.099881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.099908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.100082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.100109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.100315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.100360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.100567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.100611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.100813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.100840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.101012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.101047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.845 [2024-07-15 22:48:41.101251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.845 [2024-07-15 22:48:41.101296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.845 qpair failed and we were unable to recover it.
00:24:57.846 [2024-07-15 22:48:41.101500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.846 [2024-07-15 22:48:41.101526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.846 qpair failed and we were unable to recover it.
00:24:57.846 [2024-07-15 22:48:41.101749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.846 [2024-07-15 22:48:41.101794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.846 qpair failed and we were unable to recover it.
00:24:57.846 [2024-07-15 22:48:41.101960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.846 [2024-07-15 22:48:41.101988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.846 qpair failed and we were unable to recover it.
00:24:57.846 [2024-07-15 22:48:41.102201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.846 [2024-07-15 22:48:41.102229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.846 qpair failed and we were unable to recover it.
00:24:57.846 [2024-07-15 22:48:41.102434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.846 [2024-07-15 22:48:41.102478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.846 qpair failed and we were unable to recover it.
00:24:57.846 [2024-07-15 22:48:41.102655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.846 [2024-07-15 22:48:41.102682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.846 qpair failed and we were unable to recover it.
00:24:57.846 [2024-07-15 22:48:41.102888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.846 [2024-07-15 22:48:41.102916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.846 qpair failed and we were unable to recover it.
00:24:57.846 [2024-07-15 22:48:41.103125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.846 [2024-07-15 22:48:41.103155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.846 qpair failed and we were unable to recover it.
00:24:57.846 [2024-07-15 22:48:41.103375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.846 [2024-07-15 22:48:41.103420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.846 qpair failed and we were unable to recover it.
00:24:57.846 [2024-07-15 22:48:41.103630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.846 [2024-07-15 22:48:41.103657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.846 qpair failed and we were unable to recover it.
00:24:57.846 [2024-07-15 22:48:41.103860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-07-15 22:48:41.103895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.846 qpair failed and we were unable to recover it. 00:24:57.846 [2024-07-15 22:48:41.104074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-07-15 22:48:41.104101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.846 qpair failed and we were unable to recover it. 00:24:57.846 [2024-07-15 22:48:41.104334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-07-15 22:48:41.104379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.846 qpair failed and we were unable to recover it. 00:24:57.846 [2024-07-15 22:48:41.104591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-07-15 22:48:41.104618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.846 qpair failed and we were unable to recover it. 00:24:57.846 [2024-07-15 22:48:41.104820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-07-15 22:48:41.104847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.846 qpair failed and we were unable to recover it. 
00:24:57.846 [2024-07-15 22:48:41.105041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-07-15 22:48:41.105069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.846 qpair failed and we were unable to recover it. 00:24:57.846 [2024-07-15 22:48:41.105279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-07-15 22:48:41.105309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.846 qpair failed and we were unable to recover it. 00:24:57.846 [2024-07-15 22:48:41.105551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-07-15 22:48:41.105594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.846 qpair failed and we were unable to recover it. 00:24:57.846 [2024-07-15 22:48:41.105800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-07-15 22:48:41.105827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.846 qpair failed and we were unable to recover it. 00:24:57.846 [2024-07-15 22:48:41.106009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-07-15 22:48:41.106037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.846 qpair failed and we were unable to recover it. 
00:24:57.846 [2024-07-15 22:48:41.106230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-07-15 22:48:41.106274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.846 qpair failed and we were unable to recover it. 00:24:57.846 [2024-07-15 22:48:41.106496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-07-15 22:48:41.106540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.846 qpair failed and we were unable to recover it. 00:24:57.846 [2024-07-15 22:48:41.106718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-07-15 22:48:41.106762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.846 qpair failed and we were unable to recover it. 00:24:57.846 [2024-07-15 22:48:41.106944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-07-15 22:48:41.106971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.846 qpair failed and we were unable to recover it. 00:24:57.846 [2024-07-15 22:48:41.107139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-07-15 22:48:41.107184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.846 qpair failed and we were unable to recover it. 
00:24:57.846 [2024-07-15 22:48:41.107389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-07-15 22:48:41.107433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.846 qpair failed and we were unable to recover it. 00:24:57.846 [2024-07-15 22:48:41.107662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-07-15 22:48:41.107705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.846 qpair failed and we were unable to recover it. 00:24:57.846 [2024-07-15 22:48:41.107915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-07-15 22:48:41.107942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.846 qpair failed and we were unable to recover it. 00:24:57.846 [2024-07-15 22:48:41.108113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-07-15 22:48:41.108158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.846 qpair failed and we were unable to recover it. 00:24:57.846 [2024-07-15 22:48:41.108367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-07-15 22:48:41.108394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.846 qpair failed and we were unable to recover it. 
00:24:57.846 [2024-07-15 22:48:41.108596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-07-15 22:48:41.108640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.846 qpair failed and we were unable to recover it. 00:24:57.846 [2024-07-15 22:48:41.108843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-07-15 22:48:41.108869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.846 qpair failed and we were unable to recover it. 00:24:57.846 [2024-07-15 22:48:41.109058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-07-15 22:48:41.109084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.846 qpair failed and we were unable to recover it. 00:24:57.846 [2024-07-15 22:48:41.109316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-07-15 22:48:41.109360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.846 qpair failed and we were unable to recover it. 00:24:57.846 [2024-07-15 22:48:41.109561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-07-15 22:48:41.109605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.846 qpair failed and we were unable to recover it. 
00:24:57.846 [2024-07-15 22:48:41.109805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-07-15 22:48:41.109832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.846 qpair failed and we were unable to recover it. 00:24:57.846 [2024-07-15 22:48:41.109990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-07-15 22:48:41.110017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.846 qpair failed and we were unable to recover it. 00:24:57.846 [2024-07-15 22:48:41.110245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-07-15 22:48:41.110288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.846 qpair failed and we were unable to recover it. 00:24:57.846 [2024-07-15 22:48:41.110503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-07-15 22:48:41.110534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.846 qpair failed and we were unable to recover it. 00:24:57.846 [2024-07-15 22:48:41.110712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-07-15 22:48:41.110738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.846 qpair failed and we were unable to recover it. 
00:24:57.846 [2024-07-15 22:48:41.110968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.111012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 00:24:57.847 [2024-07-15 22:48:41.111220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.111263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 00:24:57.847 [2024-07-15 22:48:41.111497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.111524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 00:24:57.847 [2024-07-15 22:48:41.111719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.111746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 00:24:57.847 [2024-07-15 22:48:41.111888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.111916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 
00:24:57.847 [2024-07-15 22:48:41.112086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.112130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 00:24:57.847 [2024-07-15 22:48:41.112333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.112377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 00:24:57.847 [2024-07-15 22:48:41.112607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.112651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 00:24:57.847 [2024-07-15 22:48:41.112805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.112831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 00:24:57.847 [2024-07-15 22:48:41.113044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.113089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 
00:24:57.847 [2024-07-15 22:48:41.113292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.113335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 00:24:57.847 [2024-07-15 22:48:41.113506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.113550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 00:24:57.847 [2024-07-15 22:48:41.113735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.113763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 00:24:57.847 [2024-07-15 22:48:41.113961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.114005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 00:24:57.847 [2024-07-15 22:48:41.114234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.114276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 
00:24:57.847 [2024-07-15 22:48:41.114508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.114553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 00:24:57.847 [2024-07-15 22:48:41.114700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.114728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 00:24:57.847 [2024-07-15 22:48:41.114924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.114970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 00:24:57.847 [2024-07-15 22:48:41.115197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.115242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 00:24:57.847 [2024-07-15 22:48:41.115435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.115464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 
00:24:57.847 [2024-07-15 22:48:41.115690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.115717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 00:24:57.847 [2024-07-15 22:48:41.115896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.115924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 00:24:57.847 [2024-07-15 22:48:41.116108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.116155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 00:24:57.847 [2024-07-15 22:48:41.116394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.116436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 00:24:57.847 [2024-07-15 22:48:41.116635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.116679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 
00:24:57.847 [2024-07-15 22:48:41.116891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.116918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 00:24:57.847 [2024-07-15 22:48:41.117096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.117141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 00:24:57.847 [2024-07-15 22:48:41.117346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.117389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 00:24:57.847 [2024-07-15 22:48:41.117617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.117661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 00:24:57.847 [2024-07-15 22:48:41.117844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.117870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 
00:24:57.847 [2024-07-15 22:48:41.118057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.118084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 00:24:57.847 [2024-07-15 22:48:41.118279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.118328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 00:24:57.847 [2024-07-15 22:48:41.118554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.118597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 00:24:57.847 [2024-07-15 22:48:41.118776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.118803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 00:24:57.847 [2024-07-15 22:48:41.118987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.119016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 
00:24:57.847 [2024-07-15 22:48:41.119191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.119218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 00:24:57.847 [2024-07-15 22:48:41.119389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.119415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 00:24:57.847 [2024-07-15 22:48:41.119555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.119582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 00:24:57.847 [2024-07-15 22:48:41.119783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.119814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 00:24:57.847 [2024-07-15 22:48:41.120016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.120061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 
00:24:57.847 [2024-07-15 22:48:41.120270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-07-15 22:48:41.120313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.847 qpair failed and we were unable to recover it. 00:24:57.847 [2024-07-15 22:48:41.120506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.848 [2024-07-15 22:48:41.120550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.848 qpair failed and we were unable to recover it. 00:24:57.848 [2024-07-15 22:48:41.120751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.848 [2024-07-15 22:48:41.120778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.848 qpair failed and we were unable to recover it. 00:24:57.848 [2024-07-15 22:48:41.120973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.848 [2024-07-15 22:48:41.121018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.848 qpair failed and we were unable to recover it. 00:24:57.848 [2024-07-15 22:48:41.121244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.848 [2024-07-15 22:48:41.121289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.848 qpair failed and we were unable to recover it. 
00:24:57.848 [2024-07-15 22:48:41.121517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.848 [2024-07-15 22:48:41.121560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.848 qpair failed and we were unable to recover it. 00:24:57.848 [2024-07-15 22:48:41.121705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.848 [2024-07-15 22:48:41.121733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.848 qpair failed and we were unable to recover it. 00:24:57.848 [2024-07-15 22:48:41.121927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.848 [2024-07-15 22:48:41.121972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.848 qpair failed and we were unable to recover it. 00:24:57.848 [2024-07-15 22:48:41.122173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.848 [2024-07-15 22:48:41.122218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.848 qpair failed and we were unable to recover it. 00:24:57.848 [2024-07-15 22:48:41.122421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.848 [2024-07-15 22:48:41.122465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.848 qpair failed and we were unable to recover it. 
00:24:57.848 [2024-07-15 22:48:41.122637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.848 [2024-07-15 22:48:41.122664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.848 qpair failed and we were unable to recover it. 00:24:57.848 [2024-07-15 22:48:41.122810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.848 [2024-07-15 22:48:41.122837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.848 qpair failed and we were unable to recover it. 00:24:57.848 [2024-07-15 22:48:41.123044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.848 [2024-07-15 22:48:41.123089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.848 qpair failed and we were unable to recover it. 00:24:57.848 [2024-07-15 22:48:41.123293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.848 [2024-07-15 22:48:41.123336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.848 qpair failed and we were unable to recover it. 00:24:57.848 [2024-07-15 22:48:41.123561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.848 [2024-07-15 22:48:41.123606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.848 qpair failed and we were unable to recover it. 
00:24:57.848 [2024-07-15 22:48:41.123785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.848 [2024-07-15 22:48:41.123812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.848 qpair failed and we were unable to recover it. 00:24:57.848 [2024-07-15 22:48:41.124019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.848 [2024-07-15 22:48:41.124065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.848 qpair failed and we were unable to recover it. 00:24:57.848 [2024-07-15 22:48:41.124270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.848 [2024-07-15 22:48:41.124315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.848 qpair failed and we were unable to recover it. 00:24:57.848 [2024-07-15 22:48:41.124489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.848 [2024-07-15 22:48:41.124533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.848 qpair failed and we were unable to recover it. 00:24:57.848 [2024-07-15 22:48:41.124736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.848 [2024-07-15 22:48:41.124762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.848 qpair failed and we were unable to recover it. 
00:24:57.851 [2024-07-15 22:48:41.150642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.851 [2024-07-15 22:48:41.150670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.851 qpair failed and we were unable to recover it. 00:24:57.851 [2024-07-15 22:48:41.150882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.851 [2024-07-15 22:48:41.150909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.851 qpair failed and we were unable to recover it. 00:24:57.851 [2024-07-15 22:48:41.151146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.851 [2024-07-15 22:48:41.151189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.851 qpair failed and we were unable to recover it. 00:24:57.851 [2024-07-15 22:48:41.151397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.851 [2024-07-15 22:48:41.151441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.851 qpair failed and we were unable to recover it. 00:24:57.851 [2024-07-15 22:48:41.151618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.851 [2024-07-15 22:48:41.151661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.851 qpair failed and we were unable to recover it. 
00:24:57.851 [2024-07-15 22:48:41.151828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.851 [2024-07-15 22:48:41.151855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.851 qpair failed and we were unable to recover it. 00:24:57.851 [2024-07-15 22:48:41.152050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.851 [2024-07-15 22:48:41.152078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.851 qpair failed and we were unable to recover it. 00:24:57.851 [2024-07-15 22:48:41.152260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.851 [2024-07-15 22:48:41.152303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.851 qpair failed and we were unable to recover it. 00:24:57.851 [2024-07-15 22:48:41.152536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.851 [2024-07-15 22:48:41.152580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.851 qpair failed and we were unable to recover it. 00:24:57.851 [2024-07-15 22:48:41.152760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.851 [2024-07-15 22:48:41.152786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.851 qpair failed and we were unable to recover it. 
00:24:57.851 [2024-07-15 22:48:41.152964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.851 [2024-07-15 22:48:41.152992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.851 qpair failed and we were unable to recover it. 00:24:57.851 [2024-07-15 22:48:41.153192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.851 [2024-07-15 22:48:41.153236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.851 qpair failed and we were unable to recover it. 00:24:57.851 [2024-07-15 22:48:41.153417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.851 [2024-07-15 22:48:41.153462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.851 qpair failed and we were unable to recover it. 00:24:57.851 [2024-07-15 22:48:41.153619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.851 [2024-07-15 22:48:41.153647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.851 qpair failed and we were unable to recover it. 00:24:57.851 [2024-07-15 22:48:41.153849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.851 [2024-07-15 22:48:41.153884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.851 qpair failed and we were unable to recover it. 
00:24:57.851 [2024-07-15 22:48:41.154088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.851 [2024-07-15 22:48:41.154136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.851 qpair failed and we were unable to recover it. 00:24:57.851 [2024-07-15 22:48:41.154339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.851 [2024-07-15 22:48:41.154383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.851 qpair failed and we were unable to recover it. 00:24:57.851 [2024-07-15 22:48:41.154557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.851 [2024-07-15 22:48:41.154601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.851 qpair failed and we were unable to recover it. 00:24:57.851 [2024-07-15 22:48:41.154803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.851 [2024-07-15 22:48:41.154829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.851 qpair failed and we were unable to recover it. 00:24:57.851 [2024-07-15 22:48:41.155068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.851 [2024-07-15 22:48:41.155113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.851 qpair failed and we were unable to recover it. 
00:24:57.851 [2024-07-15 22:48:41.155275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.851 [2024-07-15 22:48:41.155319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.851 qpair failed and we were unable to recover it. 00:24:57.851 [2024-07-15 22:48:41.155521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.851 [2024-07-15 22:48:41.155565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.851 qpair failed and we were unable to recover it. 00:24:57.851 [2024-07-15 22:48:41.155765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.851 [2024-07-15 22:48:41.155792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.851 qpair failed and we were unable to recover it. 00:24:57.851 [2024-07-15 22:48:41.155970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.851 [2024-07-15 22:48:41.156015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.851 qpair failed and we were unable to recover it. 00:24:57.851 [2024-07-15 22:48:41.156221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.851 [2024-07-15 22:48:41.156264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.851 qpair failed and we were unable to recover it. 
00:24:57.851 [2024-07-15 22:48:41.156487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.851 [2024-07-15 22:48:41.156531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.851 qpair failed and we were unable to recover it. 00:24:57.851 [2024-07-15 22:48:41.156709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.851 [2024-07-15 22:48:41.156737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.851 qpair failed and we were unable to recover it. 00:24:57.851 [2024-07-15 22:48:41.156970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.851 [2024-07-15 22:48:41.157014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.851 qpair failed and we were unable to recover it. 00:24:57.851 [2024-07-15 22:48:41.157249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.851 [2024-07-15 22:48:41.157275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.851 qpair failed and we were unable to recover it. 00:24:57.852 [2024-07-15 22:48:41.157458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.852 [2024-07-15 22:48:41.157485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.852 qpair failed and we were unable to recover it. 
00:24:57.852 [2024-07-15 22:48:41.157672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.852 [2024-07-15 22:48:41.157699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.852 qpair failed and we were unable to recover it. 00:24:57.852 [2024-07-15 22:48:41.157901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.852 [2024-07-15 22:48:41.157928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.852 qpair failed and we were unable to recover it. 00:24:57.852 [2024-07-15 22:48:41.158135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.852 [2024-07-15 22:48:41.158162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.852 qpair failed and we were unable to recover it. 00:24:57.852 [2024-07-15 22:48:41.158382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.852 [2024-07-15 22:48:41.158426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.852 qpair failed and we were unable to recover it. 00:24:57.852 [2024-07-15 22:48:41.158633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.852 [2024-07-15 22:48:41.158678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.852 qpair failed and we were unable to recover it. 
00:24:57.852 [2024-07-15 22:48:41.158885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.852 [2024-07-15 22:48:41.158912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.852 qpair failed and we were unable to recover it. 00:24:57.852 [2024-07-15 22:48:41.159064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.852 [2024-07-15 22:48:41.159091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.852 qpair failed and we were unable to recover it. 00:24:57.852 [2024-07-15 22:48:41.159283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.852 [2024-07-15 22:48:41.159327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.852 qpair failed and we were unable to recover it. 00:24:57.852 [2024-07-15 22:48:41.159519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.852 [2024-07-15 22:48:41.159562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.852 qpair failed and we were unable to recover it. 00:24:57.852 [2024-07-15 22:48:41.159737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.852 [2024-07-15 22:48:41.159763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.852 qpair failed and we were unable to recover it. 
00:24:57.852 [2024-07-15 22:48:41.159941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.852 [2024-07-15 22:48:41.159968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.852 qpair failed and we were unable to recover it. 00:24:57.852 [2024-07-15 22:48:41.160198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.852 [2024-07-15 22:48:41.160240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.852 qpair failed and we were unable to recover it. 00:24:57.852 [2024-07-15 22:48:41.160478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.852 [2024-07-15 22:48:41.160523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.852 qpair failed and we were unable to recover it. 00:24:57.852 [2024-07-15 22:48:41.160755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.852 [2024-07-15 22:48:41.160799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.852 qpair failed and we were unable to recover it. 00:24:57.852 [2024-07-15 22:48:41.161010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.852 [2024-07-15 22:48:41.161055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.852 qpair failed and we were unable to recover it. 
00:24:57.852 [2024-07-15 22:48:41.161277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.852 [2024-07-15 22:48:41.161320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.852 qpair failed and we were unable to recover it. 00:24:57.852 [2024-07-15 22:48:41.161488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.852 [2024-07-15 22:48:41.161531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.852 qpair failed and we were unable to recover it. 00:24:57.852 [2024-07-15 22:48:41.161709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.852 [2024-07-15 22:48:41.161736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.852 qpair failed and we were unable to recover it. 00:24:57.852 [2024-07-15 22:48:41.161930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.852 [2024-07-15 22:48:41.161975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.852 qpair failed and we were unable to recover it. 00:24:57.852 [2024-07-15 22:48:41.162193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.852 [2024-07-15 22:48:41.162238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.852 qpair failed and we were unable to recover it. 
00:24:57.852 [2024-07-15 22:48:41.162472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.852 [2024-07-15 22:48:41.162517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.852 qpair failed and we were unable to recover it. 00:24:57.852 [2024-07-15 22:48:41.162698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.852 [2024-07-15 22:48:41.162724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.852 qpair failed and we were unable to recover it. 00:24:57.852 [2024-07-15 22:48:41.162934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.852 [2024-07-15 22:48:41.162962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.852 qpair failed and we were unable to recover it. 00:24:57.852 [2024-07-15 22:48:41.163137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.852 [2024-07-15 22:48:41.163181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.852 qpair failed and we were unable to recover it. 00:24:57.852 [2024-07-15 22:48:41.163354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.852 [2024-07-15 22:48:41.163384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.852 qpair failed and we were unable to recover it. 
00:24:57.852 [2024-07-15 22:48:41.163595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.852 [2024-07-15 22:48:41.163644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.852 qpair failed and we were unable to recover it. 00:24:57.852 [2024-07-15 22:48:41.163803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.852 [2024-07-15 22:48:41.163830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.852 qpair failed and we were unable to recover it. 00:24:57.852 [2024-07-15 22:48:41.164025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.852 [2024-07-15 22:48:41.164070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.852 qpair failed and we were unable to recover it. 00:24:57.852 [2024-07-15 22:48:41.164309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.852 [2024-07-15 22:48:41.164336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.852 qpair failed and we were unable to recover it. 00:24:57.852 [2024-07-15 22:48:41.164523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.852 [2024-07-15 22:48:41.164549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.852 qpair failed and we were unable to recover it. 
00:24:57.852 [2024-07-15 22:48:41.164703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.852 [2024-07-15 22:48:41.164730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.852 qpair failed and we were unable to recover it. 00:24:57.852 [2024-07-15 22:48:41.164901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.852 [2024-07-15 22:48:41.164929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.852 qpair failed and we were unable to recover it. 00:24:57.852 [2024-07-15 22:48:41.165132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.852 [2024-07-15 22:48:41.165176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.852 qpair failed and we were unable to recover it. 00:24:57.852 [2024-07-15 22:48:41.165406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.853 [2024-07-15 22:48:41.165450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.853 qpair failed and we were unable to recover it. 00:24:57.853 [2024-07-15 22:48:41.165659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.853 [2024-07-15 22:48:41.165686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.853 qpair failed and we were unable to recover it. 
00:24:57.853 [2024-07-15 22:48:41.165861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.853 [2024-07-15 22:48:41.165894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.853 qpair failed and we were unable to recover it. 00:24:57.853 [2024-07-15 22:48:41.166149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.853 [2024-07-15 22:48:41.166176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.853 qpair failed and we were unable to recover it. 00:24:57.853 [2024-07-15 22:48:41.166382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.853 [2024-07-15 22:48:41.166426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.853 qpair failed and we were unable to recover it. 00:24:57.853 [2024-07-15 22:48:41.166652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.853 [2024-07-15 22:48:41.166697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.853 qpair failed and we were unable to recover it. 00:24:57.853 [2024-07-15 22:48:41.166906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.853 [2024-07-15 22:48:41.166934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.853 qpair failed and we were unable to recover it. 
00:24:57.853 [2024-07-15 22:48:41.167115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.853 [2024-07-15 22:48:41.167141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.853 qpair failed and we were unable to recover it. 00:24:57.853 [2024-07-15 22:48:41.167348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.853 [2024-07-15 22:48:41.167396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.853 qpair failed and we were unable to recover it. 00:24:57.853 [2024-07-15 22:48:41.167543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.853 [2024-07-15 22:48:41.167570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.853 qpair failed and we were unable to recover it. 00:24:57.853 [2024-07-15 22:48:41.167768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.853 [2024-07-15 22:48:41.167795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.853 qpair failed and we were unable to recover it. 00:24:57.853 [2024-07-15 22:48:41.167972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.853 [2024-07-15 22:48:41.167999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.853 qpair failed and we were unable to recover it. 
00:24:57.853 [2024-07-15 22:48:41.168190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.853 [2024-07-15 22:48:41.168234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.853 qpair failed and we were unable to recover it.
00:24:57.853 [2024-07-15 22:48:41.168427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.853 [2024-07-15 22:48:41.168471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.853 qpair failed and we were unable to recover it.
00:24:57.853 [2024-07-15 22:48:41.168681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.853 [2024-07-15 22:48:41.168708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.853 qpair failed and we were unable to recover it.
00:24:57.853 [2024-07-15 22:48:41.168906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.853 [2024-07-15 22:48:41.168934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.853 qpair failed and we were unable to recover it.
00:24:57.853 [2024-07-15 22:48:41.169138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.853 [2024-07-15 22:48:41.169183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.853 qpair failed and we were unable to recover it.
00:24:57.853 [2024-07-15 22:48:41.169384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.853 [2024-07-15 22:48:41.169429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.853 qpair failed and we were unable to recover it.
00:24:57.853 [2024-07-15 22:48:41.169597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.853 [2024-07-15 22:48:41.169642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.853 qpair failed and we were unable to recover it.
00:24:57.853 [2024-07-15 22:48:41.169827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.853 [2024-07-15 22:48:41.169854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.853 qpair failed and we were unable to recover it.
00:24:57.853 [2024-07-15 22:48:41.170049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.853 [2024-07-15 22:48:41.170076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.853 qpair failed and we were unable to recover it.
00:24:57.853 [2024-07-15 22:48:41.170310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.853 [2024-07-15 22:48:41.170354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.853 qpair failed and we were unable to recover it.
00:24:57.853 [2024-07-15 22:48:41.170563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.853 [2024-07-15 22:48:41.170607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.853 qpair failed and we were unable to recover it.
00:24:57.853 [2024-07-15 22:48:41.170805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.853 [2024-07-15 22:48:41.170831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.853 qpair failed and we were unable to recover it.
00:24:57.853 [2024-07-15 22:48:41.171037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.853 [2024-07-15 22:48:41.171065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.853 qpair failed and we were unable to recover it.
00:24:57.853 [2024-07-15 22:48:41.171293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.853 [2024-07-15 22:48:41.171337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.853 qpair failed and we were unable to recover it.
00:24:57.853 [2024-07-15 22:48:41.171586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.853 [2024-07-15 22:48:41.171612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.853 qpair failed and we were unable to recover it.
00:24:57.853 [2024-07-15 22:48:41.171791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.853 [2024-07-15 22:48:41.171817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.853 qpair failed and we were unable to recover it.
00:24:57.853 [2024-07-15 22:48:41.172021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.853 [2024-07-15 22:48:41.172048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.853 qpair failed and we were unable to recover it.
00:24:57.853 [2024-07-15 22:48:41.172209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.853 [2024-07-15 22:48:41.172253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.853 qpair failed and we were unable to recover it.
00:24:57.853 [2024-07-15 22:48:41.172418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.853 [2024-07-15 22:48:41.172462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.853 qpair failed and we were unable to recover it.
00:24:57.853 [2024-07-15 22:48:41.172693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.853 [2024-07-15 22:48:41.172737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.853 qpair failed and we were unable to recover it.
00:24:57.853 [2024-07-15 22:48:41.172931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.853 [2024-07-15 22:48:41.172966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.853 qpair failed and we were unable to recover it.
00:24:57.853 [2024-07-15 22:48:41.173231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.853 [2024-07-15 22:48:41.173257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.853 qpair failed and we were unable to recover it.
00:24:57.853 [2024-07-15 22:48:41.173422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.853 [2024-07-15 22:48:41.173452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.853 qpair failed and we were unable to recover it.
00:24:57.853 [2024-07-15 22:48:41.173644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.853 [2024-07-15 22:48:41.173670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.853 qpair failed and we were unable to recover it.
00:24:57.853 [2024-07-15 22:48:41.173824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.853 [2024-07-15 22:48:41.173851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.853 qpair failed and we were unable to recover it.
00:24:57.853 [2024-07-15 22:48:41.174089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.853 [2024-07-15 22:48:41.174132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.853 qpair failed and we were unable to recover it.
00:24:57.853 [2024-07-15 22:48:41.174351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.853 [2024-07-15 22:48:41.174395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.853 qpair failed and we were unable to recover it.
00:24:57.853 [2024-07-15 22:48:41.174600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.853 [2024-07-15 22:48:41.174643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.853 qpair failed and we were unable to recover it.
00:24:57.853 [2024-07-15 22:48:41.174818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.853 [2024-07-15 22:48:41.174845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.853 qpair failed and we were unable to recover it.
00:24:57.853 [2024-07-15 22:48:41.175023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.175050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.175233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.175277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.175516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.175560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.175712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.175738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.175910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.175937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.176130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.176160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.176348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.176392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.176638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.176682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.176849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.176894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.177136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.177163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.177346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.177373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.177547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.177575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.177750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.177777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.178006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.178051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.178250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.178294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.178492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.178536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.178710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.178737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.178915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.178943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.179187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.179231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.179439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.179483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.179724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.179768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.179967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.180011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.180213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.180257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.180448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.180477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.180686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.180729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.180918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.180948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.181163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.181206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.181440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.181484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.181697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.181724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.181923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.181950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.182141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.182168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.182396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.182444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.182678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.182721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.182890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.182918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.183072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.183098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.183327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.183371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.183546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.183592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.183796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.183823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.184003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.184030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.184178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.184205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.854 [2024-07-15 22:48:41.184358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-07-15 22:48:41.184384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.854 qpair failed and we were unable to recover it.
00:24:57.853 [2024-07-15 22:48:41.184587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.184613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.184780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.184806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.184998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.185043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.185272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.185316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.185499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.185544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.185719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.185746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.185903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.185930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.186129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.186175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.186405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.186449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.186677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.186721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.186897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.186924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.187160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.187204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.187413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.187456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.187691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.187735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.187915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.187942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.188120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.188164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.188373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.188400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.188636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.188680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.188842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.188869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.189051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.189095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.189324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.189367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.189567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.189596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.189793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.189820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.190027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.190054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.190204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.190231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.190420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.190464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.190658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.190701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.190886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.190913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.191087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.191113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.191311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.191357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.191556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.191605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.191788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.191815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.191989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.192016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.192244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.192289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.192498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.192542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.192688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.192714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.192899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.192926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.193067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.193095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.193277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.193320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.193513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.193543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.193734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-07-15 22:48:41.193760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.855 qpair failed and we were unable to recover it.
00:24:57.855 [2024-07-15 22:48:41.193966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.856 [2024-07-15 22:48:41.194012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.856 qpair failed and we were unable to recover it.
00:24:57.856 [2024-07-15 22:48:41.194214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.856 [2024-07-15 22:48:41.194258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.856 qpair failed and we were unable to recover it.
00:24:57.856 [2024-07-15 22:48:41.194458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.856 [2024-07-15 22:48:41.194502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.856 qpair failed and we were unable to recover it.
00:24:57.856 [2024-07-15 22:48:41.194688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.856 [2024-07-15 22:48:41.194717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.856 qpair failed and we were unable to recover it.
00:24:57.856 [2024-07-15 22:48:41.194895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.194923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 00:24:57.856 [2024-07-15 22:48:41.195128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.195172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 00:24:57.856 [2024-07-15 22:48:41.195376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.195420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 00:24:57.856 [2024-07-15 22:48:41.195626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.195671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 00:24:57.856 [2024-07-15 22:48:41.195845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.195871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 
00:24:57.856 [2024-07-15 22:48:41.196036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.196063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 00:24:57.856 [2024-07-15 22:48:41.196234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.196279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 00:24:57.856 [2024-07-15 22:48:41.196484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.196527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 00:24:57.856 [2024-07-15 22:48:41.196709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.196735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 00:24:57.856 [2024-07-15 22:48:41.196924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.196954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 
00:24:57.856 [2024-07-15 22:48:41.197140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.197184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 00:24:57.856 [2024-07-15 22:48:41.197378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.197422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 00:24:57.856 [2024-07-15 22:48:41.197647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.197696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 00:24:57.856 [2024-07-15 22:48:41.197883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.197910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 00:24:57.856 [2024-07-15 22:48:41.198055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.198083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 
00:24:57.856 [2024-07-15 22:48:41.198307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.198352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 00:24:57.856 [2024-07-15 22:48:41.198544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.198571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 00:24:57.856 [2024-07-15 22:48:41.198719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.198746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 00:24:57.856 [2024-07-15 22:48:41.198927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.198957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 00:24:57.856 [2024-07-15 22:48:41.199169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.199199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 
00:24:57.856 [2024-07-15 22:48:41.199416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.199461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 00:24:57.856 [2024-07-15 22:48:41.199691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.199734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 00:24:57.856 [2024-07-15 22:48:41.200018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.200063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 00:24:57.856 [2024-07-15 22:48:41.200245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.200272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 00:24:57.856 [2024-07-15 22:48:41.200445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.200471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 
00:24:57.856 [2024-07-15 22:48:41.200702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.200749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 00:24:57.856 [2024-07-15 22:48:41.200901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.200929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 00:24:57.856 [2024-07-15 22:48:41.201106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.201151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 00:24:57.856 [2024-07-15 22:48:41.201390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.201434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 00:24:57.856 [2024-07-15 22:48:41.201654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.201699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 
00:24:57.856 [2024-07-15 22:48:41.201871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.201918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 00:24:57.856 [2024-07-15 22:48:41.202121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.202166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 00:24:57.856 [2024-07-15 22:48:41.202369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.202413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 00:24:57.856 [2024-07-15 22:48:41.202644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.202688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 00:24:57.856 [2024-07-15 22:48:41.202869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.202906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 
00:24:57.856 [2024-07-15 22:48:41.203085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.203111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 00:24:57.856 [2024-07-15 22:48:41.203335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.856 [2024-07-15 22:48:41.203365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.856 qpair failed and we were unable to recover it. 00:24:57.856 [2024-07-15 22:48:41.203555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.857 [2024-07-15 22:48:41.203602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.857 qpair failed and we were unable to recover it. 00:24:57.857 [2024-07-15 22:48:41.203752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.857 [2024-07-15 22:48:41.203779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.857 qpair failed and we were unable to recover it. 00:24:57.857 [2024-07-15 22:48:41.204001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.857 [2024-07-15 22:48:41.204028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.857 qpair failed and we were unable to recover it. 
00:24:57.857 [2024-07-15 22:48:41.204261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.857 [2024-07-15 22:48:41.204304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.857 qpair failed and we were unable to recover it. 00:24:57.857 [2024-07-15 22:48:41.204510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.857 [2024-07-15 22:48:41.204554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.857 qpair failed and we were unable to recover it. 00:24:57.857 [2024-07-15 22:48:41.204759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.857 [2024-07-15 22:48:41.204786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.857 qpair failed and we were unable to recover it. 00:24:57.857 [2024-07-15 22:48:41.204973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.857 [2024-07-15 22:48:41.205000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.857 qpair failed and we were unable to recover it. 00:24:57.857 [2024-07-15 22:48:41.205172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.857 [2024-07-15 22:48:41.205217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.857 qpair failed and we were unable to recover it. 
00:24:57.857 [2024-07-15 22:48:41.205428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.857 [2024-07-15 22:48:41.205455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.857 qpair failed and we were unable to recover it. 00:24:57.857 [2024-07-15 22:48:41.205633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.857 [2024-07-15 22:48:41.205659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.857 qpair failed and we were unable to recover it. 00:24:57.857 [2024-07-15 22:48:41.205823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.857 [2024-07-15 22:48:41.205850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.857 qpair failed and we were unable to recover it. 00:24:57.857 [2024-07-15 22:48:41.206096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.857 [2024-07-15 22:48:41.206141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.857 qpair failed and we were unable to recover it. 00:24:57.857 [2024-07-15 22:48:41.206358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.857 [2024-07-15 22:48:41.206384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.857 qpair failed and we were unable to recover it. 
00:24:57.857 [2024-07-15 22:48:41.206605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.857 [2024-07-15 22:48:41.206649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.857 qpair failed and we were unable to recover it. 00:24:57.857 [2024-07-15 22:48:41.206852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.857 [2024-07-15 22:48:41.206887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.857 qpair failed and we were unable to recover it. 00:24:57.857 [2024-07-15 22:48:41.207094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.857 [2024-07-15 22:48:41.207121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.857 qpair failed and we were unable to recover it. 00:24:57.857 [2024-07-15 22:48:41.207295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.857 [2024-07-15 22:48:41.207339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.857 qpair failed and we were unable to recover it. 00:24:57.857 [2024-07-15 22:48:41.207577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.857 [2024-07-15 22:48:41.207621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.857 qpair failed and we were unable to recover it. 
00:24:57.857 [2024-07-15 22:48:41.207827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.857 [2024-07-15 22:48:41.207854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.857 qpair failed and we were unable to recover it. 00:24:57.857 [2024-07-15 22:48:41.208013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.857 [2024-07-15 22:48:41.208040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.857 qpair failed and we were unable to recover it. 00:24:57.857 [2024-07-15 22:48:41.208278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.857 [2024-07-15 22:48:41.208304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.857 qpair failed and we were unable to recover it. 00:24:57.857 [2024-07-15 22:48:41.208510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.857 [2024-07-15 22:48:41.208555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.857 qpair failed and we were unable to recover it. 00:24:57.857 [2024-07-15 22:48:41.208733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.857 [2024-07-15 22:48:41.208759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.857 qpair failed and we were unable to recover it. 
00:24:57.857 [2024-07-15 22:48:41.208906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.857 [2024-07-15 22:48:41.208933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.857 qpair failed and we were unable to recover it. 00:24:57.857 [2024-07-15 22:48:41.209167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.857 [2024-07-15 22:48:41.209193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.857 qpair failed and we were unable to recover it. 00:24:57.857 [2024-07-15 22:48:41.209399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.857 [2024-07-15 22:48:41.209443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.857 qpair failed and we were unable to recover it. 00:24:57.857 [2024-07-15 22:48:41.209679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.857 [2024-07-15 22:48:41.209723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.857 qpair failed and we were unable to recover it. 00:24:57.857 [2024-07-15 22:48:41.209869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.857 [2024-07-15 22:48:41.209903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.857 qpair failed and we were unable to recover it. 
00:24:57.857 [2024-07-15 22:48:41.210079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.857 [2024-07-15 22:48:41.210111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.857 qpair failed and we were unable to recover it. 00:24:57.857 [2024-07-15 22:48:41.210283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.857 [2024-07-15 22:48:41.210327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.857 qpair failed and we were unable to recover it. 00:24:57.857 [2024-07-15 22:48:41.210557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.857 [2024-07-15 22:48:41.210601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.857 qpair failed and we were unable to recover it. 00:24:57.857 [2024-07-15 22:48:41.210804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.857 [2024-07-15 22:48:41.210831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.857 qpair failed and we were unable to recover it. 00:24:57.857 [2024-07-15 22:48:41.210984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.857 [2024-07-15 22:48:41.211012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.857 qpair failed and we were unable to recover it. 
00:24:57.857 [2024-07-15 22:48:41.211222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.857 [2024-07-15 22:48:41.211249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.857 qpair failed and we were unable to recover it. 00:24:57.857 [2024-07-15 22:48:41.211448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.211492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 00:24:57.858 [2024-07-15 22:48:41.211687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.211731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 00:24:57.858 [2024-07-15 22:48:41.211933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.211977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 00:24:57.858 [2024-07-15 22:48:41.212158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.212200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 
00:24:57.858 [2024-07-15 22:48:41.212375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.212419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 00:24:57.858 [2024-07-15 22:48:41.212593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.212636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 00:24:57.858 [2024-07-15 22:48:41.212812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.212839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 00:24:57.858 [2024-07-15 22:48:41.213051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.213081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 00:24:57.858 [2024-07-15 22:48:41.213333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.213377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 
00:24:57.858 [2024-07-15 22:48:41.213546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.213591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 00:24:57.858 [2024-07-15 22:48:41.213764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.213791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 00:24:57.858 [2024-07-15 22:48:41.213999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.214043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 00:24:57.858 [2024-07-15 22:48:41.214272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.214316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 00:24:57.858 [2024-07-15 22:48:41.214494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.214540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 
00:24:57.858 [2024-07-15 22:48:41.214720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.214746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 00:24:57.858 [2024-07-15 22:48:41.214964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.215009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 00:24:57.858 [2024-07-15 22:48:41.215179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.215224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 00:24:57.858 [2024-07-15 22:48:41.215446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.215490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 00:24:57.858 [2024-07-15 22:48:41.215669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.215697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 
00:24:57.858 [2024-07-15 22:48:41.215872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.215906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 00:24:57.858 [2024-07-15 22:48:41.216073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.216118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 00:24:57.858 [2024-07-15 22:48:41.216349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.216394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 00:24:57.858 [2024-07-15 22:48:41.216603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.216647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 00:24:57.858 [2024-07-15 22:48:41.216853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.216886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 
00:24:57.858 [2024-07-15 22:48:41.217066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.217111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 00:24:57.858 [2024-07-15 22:48:41.217341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.217385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 00:24:57.858 [2024-07-15 22:48:41.217607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.217651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 00:24:57.858 [2024-07-15 22:48:41.217830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.217856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 00:24:57.858 [2024-07-15 22:48:41.218093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.218137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 
00:24:57.858 [2024-07-15 22:48:41.218364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.218407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 00:24:57.858 [2024-07-15 22:48:41.218640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.218684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 00:24:57.858 [2024-07-15 22:48:41.218892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.218936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 00:24:57.858 [2024-07-15 22:48:41.219135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.219180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 00:24:57.858 [2024-07-15 22:48:41.219363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.219409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 
00:24:57.858 [2024-07-15 22:48:41.219590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.219640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 00:24:57.858 [2024-07-15 22:48:41.219817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.219844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 00:24:57.858 [2024-07-15 22:48:41.220030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.220058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 00:24:57.858 [2024-07-15 22:48:41.220255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.220298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 00:24:57.858 [2024-07-15 22:48:41.220534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.220578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 
00:24:57.858 [2024-07-15 22:48:41.220790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.220816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 00:24:57.858 [2024-07-15 22:48:41.221022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.858 [2024-07-15 22:48:41.221049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.858 qpair failed and we were unable to recover it. 00:24:57.858 [2024-07-15 22:48:41.221211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.221255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 00:24:57.859 [2024-07-15 22:48:41.221456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.221500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 00:24:57.859 [2024-07-15 22:48:41.221686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.221714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 
00:24:57.859 [2024-07-15 22:48:41.221915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.221943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 00:24:57.859 [2024-07-15 22:48:41.222168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.222213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 00:24:57.859 [2024-07-15 22:48:41.222416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.222460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 00:24:57.859 [2024-07-15 22:48:41.222691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.222735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 00:24:57.859 [2024-07-15 22:48:41.222930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.222974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 
00:24:57.859 [2024-07-15 22:48:41.223151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.223198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 00:24:57.859 [2024-07-15 22:48:41.223430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.223474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 00:24:57.859 [2024-07-15 22:48:41.223710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.223754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 00:24:57.859 [2024-07-15 22:48:41.223970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.224013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 00:24:57.859 [2024-07-15 22:48:41.224251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.224295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 
00:24:57.859 [2024-07-15 22:48:41.224541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.224585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 00:24:57.859 [2024-07-15 22:48:41.224787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.224813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 00:24:57.859 [2024-07-15 22:48:41.225020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.225065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 00:24:57.859 [2024-07-15 22:48:41.225288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.225332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 00:24:57.859 [2024-07-15 22:48:41.225497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.225540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 
00:24:57.859 [2024-07-15 22:48:41.225715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.225743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 00:24:57.859 [2024-07-15 22:48:41.225963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.226009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 00:24:57.859 [2024-07-15 22:48:41.226211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.226255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 00:24:57.859 [2024-07-15 22:48:41.226473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.226517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 00:24:57.859 [2024-07-15 22:48:41.226728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.226754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 
00:24:57.859 [2024-07-15 22:48:41.226978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.227022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 00:24:57.859 [2024-07-15 22:48:41.227221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.227265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 00:24:57.859 [2024-07-15 22:48:41.227496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.227540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 00:24:57.859 [2024-07-15 22:48:41.227714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.227740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 00:24:57.859 [2024-07-15 22:48:41.227943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.227987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 
00:24:57.859 [2024-07-15 22:48:41.228156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.228199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 00:24:57.859 [2024-07-15 22:48:41.228396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.228441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 00:24:57.859 [2024-07-15 22:48:41.228667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.228712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 00:24:57.859 [2024-07-15 22:48:41.228889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.228916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 00:24:57.859 [2024-07-15 22:48:41.229097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.229124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 
00:24:57.859 [2024-07-15 22:48:41.229324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.229372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 00:24:57.859 [2024-07-15 22:48:41.229601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.229645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 00:24:57.859 [2024-07-15 22:48:41.229849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.229894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 00:24:57.859 [2024-07-15 22:48:41.230052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.230079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 00:24:57.859 [2024-07-15 22:48:41.230287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.230332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 
00:24:57.859 [2024-07-15 22:48:41.230554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.230599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 00:24:57.859 [2024-07-15 22:48:41.230751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.230779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 00:24:57.859 [2024-07-15 22:48:41.231005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-07-15 22:48:41.231049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.859 qpair failed and we were unable to recover it. 00:24:57.860 [2024-07-15 22:48:41.231264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-07-15 22:48:41.231308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.860 qpair failed and we were unable to recover it. 00:24:57.860 [2024-07-15 22:48:41.231510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-07-15 22:48:41.231555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.860 qpair failed and we were unable to recover it. 
00:24:57.860 [2024-07-15 22:48:41.231758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-07-15 22:48:41.231784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.860 qpair failed and we were unable to recover it. 00:24:57.860 [2024-07-15 22:48:41.231982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-07-15 22:48:41.232031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.860 qpair failed and we were unable to recover it. 00:24:57.860 [2024-07-15 22:48:41.232258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-07-15 22:48:41.232302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.860 qpair failed and we were unable to recover it. 00:24:57.860 [2024-07-15 22:48:41.232509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-07-15 22:48:41.232553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.860 qpair failed and we were unable to recover it. 00:24:57.860 [2024-07-15 22:48:41.232743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-07-15 22:48:41.232769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.860 qpair failed and we were unable to recover it. 
00:24:57.860 [2024-07-15 22:48:41.233003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-07-15 22:48:41.233047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.860 qpair failed and we were unable to recover it. 00:24:57.860 [2024-07-15 22:48:41.233253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-07-15 22:48:41.233298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.860 qpair failed and we were unable to recover it. 00:24:57.860 [2024-07-15 22:48:41.233491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-07-15 22:48:41.233535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.860 qpair failed and we were unable to recover it. 00:24:57.860 [2024-07-15 22:48:41.233684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-07-15 22:48:41.233711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.860 qpair failed and we were unable to recover it. 00:24:57.860 [2024-07-15 22:48:41.233900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-07-15 22:48:41.233928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.860 qpair failed and we were unable to recover it. 
00:24:57.860 [2024-07-15 22:48:41.234134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-07-15 22:48:41.234161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.860 qpair failed and we were unable to recover it. 00:24:57.860 [2024-07-15 22:48:41.234389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-07-15 22:48:41.234433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.860 qpair failed and we were unable to recover it. 00:24:57.860 [2024-07-15 22:48:41.234644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-07-15 22:48:41.234672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.860 qpair failed and we were unable to recover it. 00:24:57.860 [2024-07-15 22:48:41.234872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-07-15 22:48:41.234912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.860 qpair failed and we were unable to recover it. 00:24:57.860 [2024-07-15 22:48:41.235117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-07-15 22:48:41.235144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.860 qpair failed and we were unable to recover it. 
00:24:57.860 [2024-07-15 22:48:41.235346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-07-15 22:48:41.235390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.860 qpair failed and we were unable to recover it. 00:24:57.860 [2024-07-15 22:48:41.235584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-07-15 22:48:41.235627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.860 qpair failed and we were unable to recover it. 00:24:57.860 [2024-07-15 22:48:41.235811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-07-15 22:48:41.235838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.860 qpair failed and we were unable to recover it. 00:24:57.860 [2024-07-15 22:48:41.236051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-07-15 22:48:41.236078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.860 qpair failed and we were unable to recover it. 00:24:57.860 [2024-07-15 22:48:41.236276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-07-15 22:48:41.236320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.860 qpair failed and we were unable to recover it. 
00:24:57.860 [2024-07-15 22:48:41.236543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-07-15 22:48:41.236586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.860 qpair failed and we were unable to recover it. 00:24:57.860 [2024-07-15 22:48:41.236789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-07-15 22:48:41.236816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.860 qpair failed and we were unable to recover it. 00:24:57.860 [2024-07-15 22:48:41.236998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-07-15 22:48:41.237026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.860 qpair failed and we were unable to recover it. 00:24:57.860 [2024-07-15 22:48:41.237232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-07-15 22:48:41.237275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.860 qpair failed and we were unable to recover it. 00:24:57.860 [2024-07-15 22:48:41.237501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-07-15 22:48:41.237545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:57.860 qpair failed and we were unable to recover it. 
00:24:57.860 [2024-07-15 22:48:41.237711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.860 [2024-07-15 22:48:41.237738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.860 qpair failed and we were unable to recover it.
00:24:57.860 [2024-07-15 22:48:41.237939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.860 [2024-07-15 22:48:41.237983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.860 qpair failed and we were unable to recover it.
00:24:57.860 [2024-07-15 22:48:41.238182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.860 [2024-07-15 22:48:41.238226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.860 qpair failed and we were unable to recover it.
00:24:57.860 [2024-07-15 22:48:41.238436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.860 [2024-07-15 22:48:41.238480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.860 qpair failed and we were unable to recover it.
00:24:57.860 [2024-07-15 22:48:41.238686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.860 [2024-07-15 22:48:41.238730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.860 qpair failed and we were unable to recover it.
00:24:57.860 [2024-07-15 22:48:41.238903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.860 [2024-07-15 22:48:41.238952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.860 qpair failed and we were unable to recover it.
00:24:57.860 [2024-07-15 22:48:41.239151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.860 [2024-07-15 22:48:41.239195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.860 qpair failed and we were unable to recover it.
00:24:57.860 [2024-07-15 22:48:41.239421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.860 [2024-07-15 22:48:41.239465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.860 qpair failed and we were unable to recover it.
00:24:57.860 [2024-07-15 22:48:41.239661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.860 [2024-07-15 22:48:41.239691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.860 qpair failed and we were unable to recover it.
00:24:57.860 [2024-07-15 22:48:41.239906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.860 [2024-07-15 22:48:41.239933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.860 qpair failed and we were unable to recover it.
00:24:57.860 [2024-07-15 22:48:41.240169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.860 [2024-07-15 22:48:41.240213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.860 qpair failed and we were unable to recover it.
00:24:57.860 [2024-07-15 22:48:41.240405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.860 [2024-07-15 22:48:41.240450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.860 qpair failed and we were unable to recover it.
00:24:57.860 [2024-07-15 22:48:41.240617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.860 [2024-07-15 22:48:41.240659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.860 qpair failed and we were unable to recover it.
00:24:57.860 [2024-07-15 22:48:41.240811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.860 [2024-07-15 22:48:41.240839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.860 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.241023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.861 [2024-07-15 22:48:41.241068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.861 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.241278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.861 [2024-07-15 22:48:41.241322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.861 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.241559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.861 [2024-07-15 22:48:41.241603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.861 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.241779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.861 [2024-07-15 22:48:41.241805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.861 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.241985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.861 [2024-07-15 22:48:41.242030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.861 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.242250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.861 [2024-07-15 22:48:41.242278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.861 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.242475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.861 [2024-07-15 22:48:41.242520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.861 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.242687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.861 [2024-07-15 22:48:41.242714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.861 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.242900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.861 [2024-07-15 22:48:41.242927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.861 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.243105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.861 [2024-07-15 22:48:41.243132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.861 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.243353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.861 [2024-07-15 22:48:41.243397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.861 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.243637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.861 [2024-07-15 22:48:41.243681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.861 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.243859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.861 [2024-07-15 22:48:41.243892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.861 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.244065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.861 [2024-07-15 22:48:41.244092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.861 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.244291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.861 [2024-07-15 22:48:41.244336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.861 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.244566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.861 [2024-07-15 22:48:41.244610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.861 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.244817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.861 [2024-07-15 22:48:41.244843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.861 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.245059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.861 [2024-07-15 22:48:41.245086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:24:57.861 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.245285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.861 [2024-07-15 22:48:41.245331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.861 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.245567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.861 [2024-07-15 22:48:41.245599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.861 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.245794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.861 [2024-07-15 22:48:41.245824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.861 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.246023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.861 [2024-07-15 22:48:41.246052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.861 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.246264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.861 [2024-07-15 22:48:41.246307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.861 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.246499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.861 [2024-07-15 22:48:41.246529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.861 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.246722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.861 [2024-07-15 22:48:41.246749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.861 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.246950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.861 [2024-07-15 22:48:41.246977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.861 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.247142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.861 [2024-07-15 22:48:41.247169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.861 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.247334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.861 [2024-07-15 22:48:41.247363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.861 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.247560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.861 [2024-07-15 22:48:41.247591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.861 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.247781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.861 [2024-07-15 22:48:41.247810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.861 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.248013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.861 [2024-07-15 22:48:41.248040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.861 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.248216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.861 [2024-07-15 22:48:41.248251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.861 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.248435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.861 [2024-07-15 22:48:41.248464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.861 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.248682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.861 [2024-07-15 22:48:41.248711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.861 qpair failed and we were unable to recover it.
00:24:57.861 [2024-07-15 22:48:41.248889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.248926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.249101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.249128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.249342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.249372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.249567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.249596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.249803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.249833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.250036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.250063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.250232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.250262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.250452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.250482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.250723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.250772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.251001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.251028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.251230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.251259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.251448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.251478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.251751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.251802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.252002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.252029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.252220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.252249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.252444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.252472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.252675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.252704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.252902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.252946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.253168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.253198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.253377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.253407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.253699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.253749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.253978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.254005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.254200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.254230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.254429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.254459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.254683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.254713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.254919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.254946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.255100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.255127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.255343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.255372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.255641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.255691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.255925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.255952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.256135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.256179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.256398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.256427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.256803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.256857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.257057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.257084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.257263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.257293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.257500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.257531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.257745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.257775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.257978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.258010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.258208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.258239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.258459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.258489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.258710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.862 [2024-07-15 22:48:41.258737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.862 qpair failed and we were unable to recover it.
00:24:57.862 [2024-07-15 22:48:41.258952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.863 [2024-07-15 22:48:41.258983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.863 qpair failed and we were unable to recover it.
00:24:57.863 [2024-07-15 22:48:41.259143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.863 [2024-07-15 22:48:41.259174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.863 qpair failed and we were unable to recover it.
00:24:57.863 [2024-07-15 22:48:41.259372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.863 [2024-07-15 22:48:41.259399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.863 qpair failed and we were unable to recover it.
00:24:57.863 [2024-07-15 22:48:41.259576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.863 [2024-07-15 22:48:41.259604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.863 qpair failed and we were unable to recover it.
00:24:57.863 [2024-07-15 22:48:41.259804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.863 [2024-07-15 22:48:41.259834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.863 qpair failed and we were unable to recover it.
00:24:57.863 [2024-07-15 22:48:41.260064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.863 [2024-07-15 22:48:41.260091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.863 qpair failed and we were unable to recover it.
00:24:57.863 [2024-07-15 22:48:41.260288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.260318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 00:24:57.863 [2024-07-15 22:48:41.260537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.260564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 00:24:57.863 [2024-07-15 22:48:41.260760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.260789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 00:24:57.863 [2024-07-15 22:48:41.260992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.261019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 00:24:57.863 [2024-07-15 22:48:41.261257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.261287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 
00:24:57.863 [2024-07-15 22:48:41.261518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.261544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 00:24:57.863 [2024-07-15 22:48:41.261745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.261774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 00:24:57.863 [2024-07-15 22:48:41.261982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.262009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 00:24:57.863 [2024-07-15 22:48:41.262208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.262234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 00:24:57.863 [2024-07-15 22:48:41.262442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.262468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 
00:24:57.863 [2024-07-15 22:48:41.262656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.262685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 00:24:57.863 [2024-07-15 22:48:41.262882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.262912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 00:24:57.863 [2024-07-15 22:48:41.263106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.263135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 00:24:57.863 [2024-07-15 22:48:41.263374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.263400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 00:24:57.863 [2024-07-15 22:48:41.263638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.263667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 
00:24:57.863 [2024-07-15 22:48:41.263860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.263897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 00:24:57.863 [2024-07-15 22:48:41.264088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.264117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 00:24:57.863 [2024-07-15 22:48:41.264336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.264363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 00:24:57.863 [2024-07-15 22:48:41.264565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.264596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 00:24:57.863 [2024-07-15 22:48:41.264766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.264795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 
00:24:57.863 [2024-07-15 22:48:41.265015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.265045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 00:24:57.863 [2024-07-15 22:48:41.265236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.265263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 00:24:57.863 [2024-07-15 22:48:41.265493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.265523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 00:24:57.863 [2024-07-15 22:48:41.265701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.265728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 00:24:57.863 [2024-07-15 22:48:41.265918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.265949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 
00:24:57.863 [2024-07-15 22:48:41.266150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.266178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 00:24:57.863 [2024-07-15 22:48:41.266321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.266349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 00:24:57.863 [2024-07-15 22:48:41.266518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.266545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 00:24:57.863 [2024-07-15 22:48:41.266746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.266773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 00:24:57.863 [2024-07-15 22:48:41.266987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.267014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 
00:24:57.863 [2024-07-15 22:48:41.267218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.267248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 00:24:57.863 [2024-07-15 22:48:41.267425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.267455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 00:24:57.863 [2024-07-15 22:48:41.267647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.267677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 00:24:57.863 [2024-07-15 22:48:41.267850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.267883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 00:24:57.863 [2024-07-15 22:48:41.268084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-07-15 22:48:41.268113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.863 qpair failed and we were unable to recover it. 
00:24:57.863 [2024-07-15 22:48:41.268310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.268339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 00:24:57.864 [2024-07-15 22:48:41.268555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.268584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 00:24:57.864 [2024-07-15 22:48:41.268772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.268799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 00:24:57.864 [2024-07-15 22:48:41.269014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.269044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 00:24:57.864 [2024-07-15 22:48:41.269238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.269269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 
00:24:57.864 [2024-07-15 22:48:41.269491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.269521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 00:24:57.864 [2024-07-15 22:48:41.269754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.269781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 00:24:57.864 [2024-07-15 22:48:41.269974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.270004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 00:24:57.864 [2024-07-15 22:48:41.270191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.270221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 00:24:57.864 [2024-07-15 22:48:41.270442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.270472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 
00:24:57.864 [2024-07-15 22:48:41.270634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.270660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 00:24:57.864 [2024-07-15 22:48:41.270816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.270843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 00:24:57.864 [2024-07-15 22:48:41.271053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.271080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 00:24:57.864 [2024-07-15 22:48:41.271279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.271309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 00:24:57.864 [2024-07-15 22:48:41.271481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.271509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 
00:24:57.864 [2024-07-15 22:48:41.271672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.271699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 00:24:57.864 [2024-07-15 22:48:41.271896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.271941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 00:24:57.864 [2024-07-15 22:48:41.272093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.272119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 00:24:57.864 [2024-07-15 22:48:41.272264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.272291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 00:24:57.864 [2024-07-15 22:48:41.272429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.272456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 
00:24:57.864 [2024-07-15 22:48:41.272649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.272678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 00:24:57.864 [2024-07-15 22:48:41.272870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.272908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 00:24:57.864 [2024-07-15 22:48:41.273108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.273139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 00:24:57.864 [2024-07-15 22:48:41.273337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.273367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 00:24:57.864 [2024-07-15 22:48:41.273584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.273614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 
00:24:57.864 [2024-07-15 22:48:41.273836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.273863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 00:24:57.864 [2024-07-15 22:48:41.274095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.274123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 00:24:57.864 [2024-07-15 22:48:41.274352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.274382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 00:24:57.864 [2024-07-15 22:48:41.274578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.274608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 00:24:57.864 [2024-07-15 22:48:41.274828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.274857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 
00:24:57.864 [2024-07-15 22:48:41.275058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.275085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 00:24:57.864 [2024-07-15 22:48:41.275289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.275319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 00:24:57.864 [2024-07-15 22:48:41.275549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.275579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 00:24:57.864 [2024-07-15 22:48:41.275748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.275779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 00:24:57.864 [2024-07-15 22:48:41.276003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.276030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 
00:24:57.864 [2024-07-15 22:48:41.276236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.276265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 00:24:57.864 [2024-07-15 22:48:41.276503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.276530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 00:24:57.864 [2024-07-15 22:48:41.276739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.276781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 00:24:57.864 [2024-07-15 22:48:41.276983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.277011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 00:24:57.864 [2024-07-15 22:48:41.277213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-07-15 22:48:41.277242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.864 qpair failed and we were unable to recover it. 
00:24:57.864 [2024-07-15 22:48:41.277435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-07-15 22:48:41.277464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.865 qpair failed and we were unable to recover it. 00:24:57.865 [2024-07-15 22:48:41.277660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-07-15 22:48:41.277689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.865 qpair failed and we were unable to recover it. 00:24:57.865 [2024-07-15 22:48:41.277887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-07-15 22:48:41.277915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.865 qpair failed and we were unable to recover it. 00:24:57.865 [2024-07-15 22:48:41.278121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-07-15 22:48:41.278151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.865 qpair failed and we were unable to recover it. 00:24:57.865 [2024-07-15 22:48:41.278374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-07-15 22:48:41.278404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.865 qpair failed and we were unable to recover it. 
00:24:57.865 [2024-07-15 22:48:41.278612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-07-15 22:48:41.278639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.865 qpair failed and we were unable to recover it. 00:24:57.865 [2024-07-15 22:48:41.278813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-07-15 22:48:41.278840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.865 qpair failed and we were unable to recover it. 00:24:57.865 [2024-07-15 22:48:41.279048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-07-15 22:48:41.279078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.865 qpair failed and we were unable to recover it. 00:24:57.865 [2024-07-15 22:48:41.279299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-07-15 22:48:41.279329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.865 qpair failed and we were unable to recover it. 00:24:57.865 [2024-07-15 22:48:41.279532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-07-15 22:48:41.279562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.865 qpair failed and we were unable to recover it. 
00:24:57.865 [2024-07-15 22:48:41.279756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.865 [2024-07-15 22:48:41.279783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:57.865 qpair failed and we were unable to recover it.
00:24:57.868 [... the identical connect() failed (errno = 111, ECONNREFUSED) / sock connection error / "qpair failed and we were unable to recover it." sequence repeated for every subsequent retry against 10.0.0.2:4420 (tqpair=0x7fd668000b90) from 22:48:41.280 through 22:48:41.305 ...]
00:24:57.868 [2024-07-15 22:48:41.305982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-07-15 22:48:41.306012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.868 qpair failed and we were unable to recover it. 00:24:57.868 [2024-07-15 22:48:41.306179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-07-15 22:48:41.306206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.868 qpair failed and we were unable to recover it. 00:24:57.868 [2024-07-15 22:48:41.306380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-07-15 22:48:41.306409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.868 qpair failed and we were unable to recover it. 00:24:57.868 [2024-07-15 22:48:41.306627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-07-15 22:48:41.306656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.868 qpair failed and we were unable to recover it. 00:24:57.868 [2024-07-15 22:48:41.306823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-07-15 22:48:41.306852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.868 qpair failed and we were unable to recover it. 
00:24:57.868 [2024-07-15 22:48:41.307054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-07-15 22:48:41.307081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.868 qpair failed and we were unable to recover it. 00:24:57.868 [2024-07-15 22:48:41.307304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-07-15 22:48:41.307333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.868 qpair failed and we were unable to recover it. 00:24:57.868 [2024-07-15 22:48:41.307563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-07-15 22:48:41.307593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.868 qpair failed and we were unable to recover it. 00:24:57.868 [2024-07-15 22:48:41.307788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-07-15 22:48:41.307817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.868 qpair failed and we were unable to recover it. 00:24:57.868 [2024-07-15 22:48:41.308017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-07-15 22:48:41.308044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.868 qpair failed and we were unable to recover it. 
00:24:57.868 [2024-07-15 22:48:41.308207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-07-15 22:48:41.308237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.868 qpair failed and we were unable to recover it. 00:24:57.868 [2024-07-15 22:48:41.308461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-07-15 22:48:41.308491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.868 qpair failed and we were unable to recover it. 00:24:57.868 [2024-07-15 22:48:41.308684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-07-15 22:48:41.308714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.868 qpair failed and we were unable to recover it. 00:24:57.868 [2024-07-15 22:48:41.308936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-07-15 22:48:41.308963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.868 qpair failed and we were unable to recover it. 00:24:57.868 [2024-07-15 22:48:41.309141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-07-15 22:48:41.309170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.868 qpair failed and we were unable to recover it. 
00:24:57.868 [2024-07-15 22:48:41.309387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-07-15 22:48:41.309417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.868 qpair failed and we were unable to recover it. 00:24:57.868 [2024-07-15 22:48:41.309650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-07-15 22:48:41.309677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.868 qpair failed and we were unable to recover it. 00:24:57.868 [2024-07-15 22:48:41.309855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-07-15 22:48:41.309892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.868 qpair failed and we were unable to recover it. 00:24:57.868 [2024-07-15 22:48:41.310039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-07-15 22:48:41.310067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.868 qpair failed and we were unable to recover it. 00:24:57.868 [2024-07-15 22:48:41.310292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-07-15 22:48:41.310322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.868 qpair failed and we were unable to recover it. 
00:24:57.868 [2024-07-15 22:48:41.310540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-07-15 22:48:41.310570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.868 qpair failed and we were unable to recover it. 00:24:57.868 [2024-07-15 22:48:41.310831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-07-15 22:48:41.310858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.868 qpair failed and we were unable to recover it. 00:24:57.868 [2024-07-15 22:48:41.311083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-07-15 22:48:41.311112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.868 qpair failed and we were unable to recover it. 00:24:57.868 [2024-07-15 22:48:41.311303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-07-15 22:48:41.311332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.868 qpair failed and we were unable to recover it. 00:24:57.868 [2024-07-15 22:48:41.311545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-07-15 22:48:41.311575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.868 qpair failed and we were unable to recover it. 
00:24:57.868 [2024-07-15 22:48:41.311773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-07-15 22:48:41.311800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.868 qpair failed and we were unable to recover it. 00:24:57.868 [2024-07-15 22:48:41.311980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-07-15 22:48:41.312007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.868 qpair failed and we were unable to recover it. 00:24:57.868 [2024-07-15 22:48:41.312210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-07-15 22:48:41.312237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.868 qpair failed and we were unable to recover it. 00:24:57.868 [2024-07-15 22:48:41.312472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-07-15 22:48:41.312502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.868 qpair failed and we were unable to recover it. 00:24:57.868 [2024-07-15 22:48:41.312710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-07-15 22:48:41.312737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.868 qpair failed and we were unable to recover it. 
00:24:57.868 [2024-07-15 22:48:41.312939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-07-15 22:48:41.312966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.869 qpair failed and we were unable to recover it. 00:24:57.869 [2024-07-15 22:48:41.313155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.869 [2024-07-15 22:48:41.313186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.869 qpair failed and we were unable to recover it. 00:24:57.869 [2024-07-15 22:48:41.313353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.869 [2024-07-15 22:48:41.313383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.869 qpair failed and we were unable to recover it. 00:24:57.869 [2024-07-15 22:48:41.313552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.869 [2024-07-15 22:48:41.313580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.869 qpair failed and we were unable to recover it. 00:24:57.869 [2024-07-15 22:48:41.313807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.869 [2024-07-15 22:48:41.313838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.869 qpair failed and we were unable to recover it. 
00:24:57.869 [2024-07-15 22:48:41.314079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.869 [2024-07-15 22:48:41.314107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.869 qpair failed and we were unable to recover it. 00:24:57.869 [2024-07-15 22:48:41.314309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.869 [2024-07-15 22:48:41.314339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.869 qpair failed and we were unable to recover it. 00:24:57.869 [2024-07-15 22:48:41.314543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.869 [2024-07-15 22:48:41.314570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.869 qpair failed and we were unable to recover it. 00:24:57.869 [2024-07-15 22:48:41.314753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.869 [2024-07-15 22:48:41.314783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.869 qpair failed and we were unable to recover it. 00:24:57.869 [2024-07-15 22:48:41.314975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.869 [2024-07-15 22:48:41.315005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:57.869 qpair failed and we were unable to recover it. 
00:24:58.157 [2024-07-15 22:48:41.315173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.157 [2024-07-15 22:48:41.315206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.157 qpair failed and we were unable to recover it. 00:24:58.157 [2024-07-15 22:48:41.315405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.157 [2024-07-15 22:48:41.315433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.157 qpair failed and we were unable to recover it. 00:24:58.157 [2024-07-15 22:48:41.315596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.157 [2024-07-15 22:48:41.315626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.157 qpair failed and we were unable to recover it. 00:24:58.157 [2024-07-15 22:48:41.315847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.157 [2024-07-15 22:48:41.315884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.157 qpair failed and we were unable to recover it. 00:24:58.157 [2024-07-15 22:48:41.316133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.157 [2024-07-15 22:48:41.316169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.157 qpair failed and we were unable to recover it. 
00:24:58.157 [2024-07-15 22:48:41.316345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.157 [2024-07-15 22:48:41.316373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.157 qpair failed and we were unable to recover it. 00:24:58.157 [2024-07-15 22:48:41.316545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.157 [2024-07-15 22:48:41.316576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.157 qpair failed and we were unable to recover it. 00:24:58.157 [2024-07-15 22:48:41.316749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.157 [2024-07-15 22:48:41.316780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.157 qpair failed and we were unable to recover it. 00:24:58.157 [2024-07-15 22:48:41.316948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.157 [2024-07-15 22:48:41.316979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.157 qpair failed and we were unable to recover it. 00:24:58.157 [2024-07-15 22:48:41.317148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.157 [2024-07-15 22:48:41.317175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.157 qpair failed and we were unable to recover it. 
00:24:58.157 [2024-07-15 22:48:41.317371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.157 [2024-07-15 22:48:41.317401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.157 qpair failed and we were unable to recover it. 00:24:58.157 [2024-07-15 22:48:41.317568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.157 [2024-07-15 22:48:41.317598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.157 qpair failed and we were unable to recover it. 00:24:58.157 [2024-07-15 22:48:41.317818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.157 [2024-07-15 22:48:41.317848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.157 qpair failed and we were unable to recover it. 00:24:58.157 [2024-07-15 22:48:41.318051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.157 [2024-07-15 22:48:41.318078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.157 qpair failed and we were unable to recover it. 00:24:58.157 [2024-07-15 22:48:41.318277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.157 [2024-07-15 22:48:41.318307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.157 qpair failed and we were unable to recover it. 
00:24:58.157 [2024-07-15 22:48:41.318513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.157 [2024-07-15 22:48:41.318540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.157 qpair failed and we were unable to recover it. 00:24:58.157 [2024-07-15 22:48:41.318713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.157 [2024-07-15 22:48:41.318740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.157 qpair failed and we were unable to recover it. 00:24:58.157 [2024-07-15 22:48:41.318898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.157 [2024-07-15 22:48:41.318930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.157 qpair failed and we were unable to recover it. 00:24:58.157 [2024-07-15 22:48:41.319102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.157 [2024-07-15 22:48:41.319129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.157 qpair failed and we were unable to recover it. 00:24:58.157 [2024-07-15 22:48:41.319332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.157 [2024-07-15 22:48:41.319362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.157 qpair failed and we were unable to recover it. 
00:24:58.157 [2024-07-15 22:48:41.319564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.157 [2024-07-15 22:48:41.319594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.157 qpair failed and we were unable to recover it. 00:24:58.157 [2024-07-15 22:48:41.319783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.157 [2024-07-15 22:48:41.319810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.157 qpair failed and we were unable to recover it. 00:24:58.157 [2024-07-15 22:48:41.319965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.157 [2024-07-15 22:48:41.319992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.157 qpair failed and we were unable to recover it. 00:24:58.157 [2024-07-15 22:48:41.320171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.157 [2024-07-15 22:48:41.320198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.157 qpair failed and we were unable to recover it. 00:24:58.157 [2024-07-15 22:48:41.320393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.157 [2024-07-15 22:48:41.320423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.157 qpair failed and we were unable to recover it. 
00:24:58.157 [2024-07-15 22:48:41.320632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.158 [2024-07-15 22:48:41.320659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.158 qpair failed and we were unable to recover it. 00:24:58.158 [2024-07-15 22:48:41.320856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.158 [2024-07-15 22:48:41.320893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.158 qpair failed and we were unable to recover it. 00:24:58.158 [2024-07-15 22:48:41.321051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.158 [2024-07-15 22:48:41.321081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.158 qpair failed and we were unable to recover it. 00:24:58.158 [2024-07-15 22:48:41.321298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.158 [2024-07-15 22:48:41.321327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.158 qpair failed and we were unable to recover it. 00:24:58.158 [2024-07-15 22:48:41.321498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.158 [2024-07-15 22:48:41.321525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.158 qpair failed and we were unable to recover it. 
00:24:58.158 [2024-07-15 22:48:41.321703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.158 [2024-07-15 22:48:41.321729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.158 qpair failed and we were unable to recover it. 00:24:58.158 [2024-07-15 22:48:41.321910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.158 [2024-07-15 22:48:41.321941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.158 qpair failed and we were unable to recover it. 00:24:58.158 [2024-07-15 22:48:41.322101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.158 [2024-07-15 22:48:41.322131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.158 qpair failed and we were unable to recover it. 00:24:58.158 [2024-07-15 22:48:41.322331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.158 [2024-07-15 22:48:41.322357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.158 qpair failed and we were unable to recover it. 00:24:58.158 [2024-07-15 22:48:41.322576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.158 [2024-07-15 22:48:41.322606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.158 qpair failed and we were unable to recover it. 
00:24:58.158 [2024-07-15 22:48:41.322773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.158 [2024-07-15 22:48:41.322803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.158 qpair failed and we were unable to recover it. 00:24:58.158 [2024-07-15 22:48:41.323002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.158 [2024-07-15 22:48:41.323030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.158 qpair failed and we were unable to recover it. 00:24:58.158 [2024-07-15 22:48:41.323233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.158 [2024-07-15 22:48:41.323260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.158 qpair failed and we were unable to recover it. 00:24:58.158 [2024-07-15 22:48:41.323436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.158 [2024-07-15 22:48:41.323466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.158 qpair failed and we were unable to recover it. 00:24:58.158 [2024-07-15 22:48:41.323629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.158 [2024-07-15 22:48:41.323658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.158 qpair failed and we were unable to recover it. 
00:24:58.159 [2024-07-15 22:48:41.332585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.159 [2024-07-15 22:48:41.332615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.159 qpair failed and we were unable to recover it. 00:24:58.159 [2024-07-15 22:48:41.332818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.159 [2024-07-15 22:48:41.332845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.159 qpair failed and we were unable to recover it. 00:24:58.159 [2024-07-15 22:48:41.333015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.159 [2024-07-15 22:48:41.333046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.159 qpair failed and we were unable to recover it. 00:24:58.159 [2024-07-15 22:48:41.333255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.159 [2024-07-15 22:48:41.333285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.159 qpair failed and we were unable to recover it. 00:24:58.159 [2024-07-15 22:48:41.333485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.159 [2024-07-15 22:48:41.333530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.159 qpair failed and we were unable to recover it. 
00:24:58.159 [2024-07-15 22:48:41.333737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.159 [2024-07-15 22:48:41.333765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.159 qpair failed and we were unable to recover it. 00:24:58.159 [2024-07-15 22:48:41.333925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.159 [2024-07-15 22:48:41.333953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.159 qpair failed and we were unable to recover it. 00:24:58.159 [2024-07-15 22:48:41.334101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.159 [2024-07-15 22:48:41.334145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.159 qpair failed and we were unable to recover it. 00:24:58.159 [2024-07-15 22:48:41.334330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.159 [2024-07-15 22:48:41.334361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.159 qpair failed and we were unable to recover it. 00:24:58.159 [2024-07-15 22:48:41.334561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.159 [2024-07-15 22:48:41.334588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.159 qpair failed and we were unable to recover it. 
00:24:58.159 [2024-07-15 22:48:41.334788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.159 [2024-07-15 22:48:41.334817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.159 qpair failed and we were unable to recover it. 00:24:58.159 [2024-07-15 22:48:41.335005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.159 [2024-07-15 22:48:41.335036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.159 qpair failed and we were unable to recover it. 00:24:58.159 [2024-07-15 22:48:41.335257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.159 [2024-07-15 22:48:41.335286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.159 qpair failed and we were unable to recover it. 00:24:58.159 [2024-07-15 22:48:41.335465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.159 [2024-07-15 22:48:41.335491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.159 qpair failed and we were unable to recover it. 00:24:58.159 [2024-07-15 22:48:41.335635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.159 [2024-07-15 22:48:41.335662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.159 qpair failed and we were unable to recover it. 
00:24:58.159 [2024-07-15 22:48:41.335864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.159 [2024-07-15 22:48:41.335907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.159 qpair failed and we were unable to recover it. 00:24:58.159 [2024-07-15 22:48:41.336128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.159 [2024-07-15 22:48:41.336158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.159 qpair failed and we were unable to recover it. 00:24:58.159 [2024-07-15 22:48:41.336350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.159 [2024-07-15 22:48:41.336381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.159 qpair failed and we were unable to recover it. 00:24:58.159 [2024-07-15 22:48:41.336549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.159 [2024-07-15 22:48:41.336580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.159 qpair failed and we were unable to recover it. 00:24:58.159 [2024-07-15 22:48:41.336796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.159 [2024-07-15 22:48:41.336826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.159 qpair failed and we were unable to recover it. 
00:24:58.159 [2024-07-15 22:48:41.337016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.159 [2024-07-15 22:48:41.337046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.159 qpair failed and we were unable to recover it. 00:24:58.159 [2024-07-15 22:48:41.337243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.159 [2024-07-15 22:48:41.337270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.159 qpair failed and we were unable to recover it. 00:24:58.159 [2024-07-15 22:48:41.337492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.159 [2024-07-15 22:48:41.337522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.159 qpair failed and we were unable to recover it. 00:24:58.159 [2024-07-15 22:48:41.337707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.159 [2024-07-15 22:48:41.337737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.159 qpair failed and we were unable to recover it. 00:24:58.159 [2024-07-15 22:48:41.337928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.159 [2024-07-15 22:48:41.337958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.159 qpair failed and we were unable to recover it. 
00:24:58.159 [2024-07-15 22:48:41.338132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.159 [2024-07-15 22:48:41.338159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.159 qpair failed and we were unable to recover it. 00:24:58.159 [2024-07-15 22:48:41.338333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.160 [2024-07-15 22:48:41.338361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.160 qpair failed and we were unable to recover it. 00:24:58.160 [2024-07-15 22:48:41.338566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.160 [2024-07-15 22:48:41.338593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.160 qpair failed and we were unable to recover it. 00:24:58.160 [2024-07-15 22:48:41.338785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.160 [2024-07-15 22:48:41.338814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.160 qpair failed and we were unable to recover it. 00:24:58.160 [2024-07-15 22:48:41.338995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.160 [2024-07-15 22:48:41.339023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.160 qpair failed and we were unable to recover it. 
00:24:58.160 [2024-07-15 22:48:41.339213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.160 [2024-07-15 22:48:41.339243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.160 qpair failed and we were unable to recover it. 00:24:58.160 [2024-07-15 22:48:41.339474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.160 [2024-07-15 22:48:41.339504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.160 qpair failed and we were unable to recover it. 00:24:58.160 [2024-07-15 22:48:41.339727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.160 [2024-07-15 22:48:41.339756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.160 qpair failed and we were unable to recover it. 00:24:58.160 [2024-07-15 22:48:41.339955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.160 [2024-07-15 22:48:41.339982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.160 qpair failed and we were unable to recover it. 00:24:58.160 [2024-07-15 22:48:41.340155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.160 [2024-07-15 22:48:41.340185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.160 qpair failed and we were unable to recover it. 
00:24:58.160 [2024-07-15 22:48:41.340404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.160 [2024-07-15 22:48:41.340434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:58.160 qpair failed and we were unable to recover it. 00:24:58.160 [2024-07-15 22:48:41.340825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.160 [2024-07-15 22:48:41.340900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.160 qpair failed and we were unable to recover it. 00:24:58.160 [2024-07-15 22:48:41.341104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.160 [2024-07-15 22:48:41.341131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.160 qpair failed and we were unable to recover it. 00:24:58.160 [2024-07-15 22:48:41.341356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.160 [2024-07-15 22:48:41.341385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.160 qpair failed and we were unable to recover it. 00:24:58.160 [2024-07-15 22:48:41.341720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.160 [2024-07-15 22:48:41.341768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.160 qpair failed and we were unable to recover it. 
00:24:58.160 [2024-07-15 22:48:41.341954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.160 [2024-07-15 22:48:41.341981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.160 qpair failed and we were unable to recover it. 00:24:58.160 [2024-07-15 22:48:41.342161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.160 [2024-07-15 22:48:41.342187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.160 qpair failed and we were unable to recover it. 00:24:58.160 [2024-07-15 22:48:41.342388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.160 [2024-07-15 22:48:41.342415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.160 qpair failed and we were unable to recover it. 00:24:58.160 [2024-07-15 22:48:41.342626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.160 [2024-07-15 22:48:41.342655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.160 qpair failed and we were unable to recover it. 00:24:58.160 [2024-07-15 22:48:41.342819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.160 [2024-07-15 22:48:41.342854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.160 qpair failed and we were unable to recover it. 
00:24:58.160 [2024-07-15 22:48:41.343030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.160 [2024-07-15 22:48:41.343057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.160 qpair failed and we were unable to recover it. 00:24:58.160 [2024-07-15 22:48:41.343280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.160 [2024-07-15 22:48:41.343309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.160 qpair failed and we were unable to recover it. 00:24:58.160 [2024-07-15 22:48:41.343674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.160 [2024-07-15 22:48:41.343734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.160 qpair failed and we were unable to recover it. 00:24:58.160 [2024-07-15 22:48:41.343947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.160 [2024-07-15 22:48:41.343974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.160 qpair failed and we were unable to recover it. 00:24:58.160 [2024-07-15 22:48:41.344152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.160 [2024-07-15 22:48:41.344179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.160 qpair failed and we were unable to recover it. 
00:24:58.160 [2024-07-15 22:48:41.344355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.160 [2024-07-15 22:48:41.344384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.160 qpair failed and we were unable to recover it. 00:24:58.160 [2024-07-15 22:48:41.344582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.160 [2024-07-15 22:48:41.344611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.160 qpair failed and we were unable to recover it. 00:24:58.160 [2024-07-15 22:48:41.344805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.160 [2024-07-15 22:48:41.344831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.160 qpair failed and we were unable to recover it. 00:24:58.160 [2024-07-15 22:48:41.345035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.160 [2024-07-15 22:48:41.345063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.160 qpair failed and we were unable to recover it. 00:24:58.160 [2024-07-15 22:48:41.345264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.160 [2024-07-15 22:48:41.345293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.160 qpair failed and we were unable to recover it. 
00:24:58.160 [2024-07-15 22:48:41.345566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.160 [2024-07-15 22:48:41.345615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.160 qpair failed and we were unable to recover it. 00:24:58.160 [2024-07-15 22:48:41.345844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.160 [2024-07-15 22:48:41.345874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.160 qpair failed and we were unable to recover it. 00:24:58.160 [2024-07-15 22:48:41.346056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.160 [2024-07-15 22:48:41.346082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.160 qpair failed and we were unable to recover it. 00:24:58.160 [2024-07-15 22:48:41.346307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.160 [2024-07-15 22:48:41.346336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.160 qpair failed and we were unable to recover it. 00:24:58.160 [2024-07-15 22:48:41.346715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.160 [2024-07-15 22:48:41.346770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.160 qpair failed and we were unable to recover it. 
00:24:58.160 [2024-07-15 22:48:41.346973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.160 [2024-07-15 22:48:41.347000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.160 qpair failed and we were unable to recover it.
00:24:58.160 [2024-07-15 22:48:41.347177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.160 [2024-07-15 22:48:41.347203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.160 qpair failed and we were unable to recover it.
00:24:58.160 [2024-07-15 22:48:41.347427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.160 [2024-07-15 22:48:41.347456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.160 qpair failed and we were unable to recover it.
00:24:58.160 [2024-07-15 22:48:41.347848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.160 [2024-07-15 22:48:41.347909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.160 qpair failed and we were unable to recover it.
00:24:58.160 [2024-07-15 22:48:41.348082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.160 [2024-07-15 22:48:41.348110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.160 qpair failed and we were unable to recover it.
00:24:58.160 [2024-07-15 22:48:41.348286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.160 [2024-07-15 22:48:41.348312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.160 qpair failed and we were unable to recover it.
00:24:58.160 [2024-07-15 22:48:41.348515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.160 [2024-07-15 22:48:41.348544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.160 qpair failed and we were unable to recover it.
00:24:58.160 [2024-07-15 22:48:41.348737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.161 [2024-07-15 22:48:41.348766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.161 qpair failed and we were unable to recover it.
00:24:58.161 [2024-07-15 22:48:41.348983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.161 [2024-07-15 22:48:41.349010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.161 qpair failed and we were unable to recover it.
00:24:58.161 [2024-07-15 22:48:41.349165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.161 [2024-07-15 22:48:41.349192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.161 qpair failed and we were unable to recover it.
00:24:58.161 [2024-07-15 22:48:41.349390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.161 [2024-07-15 22:48:41.349419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.161 qpair failed and we were unable to recover it.
00:24:58.161 [2024-07-15 22:48:41.349724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.161 [2024-07-15 22:48:41.349787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.161 qpair failed and we were unable to recover it.
00:24:58.161 [2024-07-15 22:48:41.350010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.161 [2024-07-15 22:48:41.350037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.161 qpair failed and we were unable to recover it.
00:24:58.161 [2024-07-15 22:48:41.350212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.161 [2024-07-15 22:48:41.350238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.161 qpair failed and we were unable to recover it.
00:24:58.161 [2024-07-15 22:48:41.350442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.161 [2024-07-15 22:48:41.350470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.161 qpair failed and we were unable to recover it.
00:24:58.161 [2024-07-15 22:48:41.350628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.161 [2024-07-15 22:48:41.350657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.161 qpair failed and we were unable to recover it.
00:24:58.161 [2024-07-15 22:48:41.350856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.161 [2024-07-15 22:48:41.350894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.161 qpair failed and we were unable to recover it.
00:24:58.161 [2024-07-15 22:48:41.351073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.161 [2024-07-15 22:48:41.351099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.161 qpair failed and we were unable to recover it.
00:24:58.161 [2024-07-15 22:48:41.351295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.161 [2024-07-15 22:48:41.351324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.161 qpair failed and we were unable to recover it.
00:24:58.161 [2024-07-15 22:48:41.351514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.161 [2024-07-15 22:48:41.351543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.161 qpair failed and we were unable to recover it.
00:24:58.161 [2024-07-15 22:48:41.351736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.161 [2024-07-15 22:48:41.351766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.161 qpair failed and we were unable to recover it.
00:24:58.161 [2024-07-15 22:48:41.351962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.161 [2024-07-15 22:48:41.351989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.161 qpair failed and we were unable to recover it.
00:24:58.161 [2024-07-15 22:48:41.352187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.161 [2024-07-15 22:48:41.352217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.161 qpair failed and we were unable to recover it.
00:24:58.161 [2024-07-15 22:48:41.352384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.161 [2024-07-15 22:48:41.352414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.161 qpair failed and we were unable to recover it.
00:24:58.161 [2024-07-15 22:48:41.352602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.161 [2024-07-15 22:48:41.352632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.161 qpair failed and we were unable to recover it.
00:24:58.161 [2024-07-15 22:48:41.352825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.161 [2024-07-15 22:48:41.352852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.161 qpair failed and we were unable to recover it.
00:24:58.161 [2024-07-15 22:48:41.353054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.161 [2024-07-15 22:48:41.353085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.161 qpair failed and we were unable to recover it.
00:24:58.161 [2024-07-15 22:48:41.353254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.161 [2024-07-15 22:48:41.353283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.161 qpair failed and we were unable to recover it.
00:24:58.161 [2024-07-15 22:48:41.353450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.161 [2024-07-15 22:48:41.353480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.161 qpair failed and we were unable to recover it.
00:24:58.161 [2024-07-15 22:48:41.353701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.161 [2024-07-15 22:48:41.353727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.161 qpair failed and we were unable to recover it.
00:24:58.161 [2024-07-15 22:48:41.353920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.161 [2024-07-15 22:48:41.353950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.161 qpair failed and we were unable to recover it.
00:24:58.161 [2024-07-15 22:48:41.354123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.161 [2024-07-15 22:48:41.354150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.161 qpair failed and we were unable to recover it.
00:24:58.161 [2024-07-15 22:48:41.354340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.161 [2024-07-15 22:48:41.354369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.161 qpair failed and we were unable to recover it.
00:24:58.161 [2024-07-15 22:48:41.354554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.161 [2024-07-15 22:48:41.354581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.161 qpair failed and we were unable to recover it.
00:24:58.161 [2024-07-15 22:48:41.354733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.161 [2024-07-15 22:48:41.354762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.161 qpair failed and we were unable to recover it.
00:24:58.161 [2024-07-15 22:48:41.354949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.161 [2024-07-15 22:48:41.354978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.161 qpair failed and we were unable to recover it.
00:24:58.161 [2024-07-15 22:48:41.355197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.161 [2024-07-15 22:48:41.355227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.161 qpair failed and we were unable to recover it.
00:24:58.161 [2024-07-15 22:48:41.355421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.161 [2024-07-15 22:48:41.355448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.161 qpair failed and we were unable to recover it.
00:24:58.161 [2024-07-15 22:48:41.355658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.161 [2024-07-15 22:48:41.355687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.161 qpair failed and we were unable to recover it.
00:24:58.161 [2024-07-15 22:48:41.355852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.161 [2024-07-15 22:48:41.355886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.161 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.356102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.356131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.356321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.356348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.356553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.356582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.356763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.356792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.356986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.357016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.357182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.357209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.357435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.357464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.357691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.357720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.357909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.357940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.358145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.358173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.358401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.358430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.358632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.358661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.358896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.358927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.359109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.359135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.359330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.359360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.359552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.359581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.359808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.359837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.360090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.360117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.360320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.360349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.360539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.360568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.360727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.360756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.360924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.360951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.361150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.361179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.361346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.361375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.361538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.361567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.361783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.361809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.362000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.362030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.362219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.362248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.362413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.362442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.362612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.362638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.362858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.362892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.363117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.363143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.363332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.363361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.363585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.363611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.363782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.363811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.364006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.364035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.364194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.364223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.364417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.364443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.364661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.364691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.364882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.364911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.365083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.162 [2024-07-15 22:48:41.365112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.162 qpair failed and we were unable to recover it.
00:24:58.162 [2024-07-15 22:48:41.365314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.163 [2024-07-15 22:48:41.365340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.163 qpair failed and we were unable to recover it.
00:24:58.163 [2024-07-15 22:48:41.365537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.163 [2024-07-15 22:48:41.365565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.163 qpair failed and we were unable to recover it.
00:24:58.163 [2024-07-15 22:48:41.365737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.163 [2024-07-15 22:48:41.365767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.163 qpair failed and we were unable to recover it.
00:24:58.163 [2024-07-15 22:48:41.365960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.163 [2024-07-15 22:48:41.365990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.163 qpair failed and we were unable to recover it.
00:24:58.163 [2024-07-15 22:48:41.366151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.163 [2024-07-15 22:48:41.366178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.163 qpair failed and we were unable to recover it.
00:24:58.163 [2024-07-15 22:48:41.366402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.163 [2024-07-15 22:48:41.366431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.163 qpair failed and we were unable to recover it.
00:24:58.163 [2024-07-15 22:48:41.366599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.163 [2024-07-15 22:48:41.366628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.163 qpair failed and we were unable to recover it.
00:24:58.163 [2024-07-15 22:48:41.366791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.163 [2024-07-15 22:48:41.366821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.163 qpair failed and we were unable to recover it.
00:24:58.163 [2024-07-15 22:48:41.367016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.163 [2024-07-15 22:48:41.367043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.163 qpair failed and we were unable to recover it.
00:24:58.163 [2024-07-15 22:48:41.367240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.163 [2024-07-15 22:48:41.367269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.163 qpair failed and we were unable to recover it.
00:24:58.163 [2024-07-15 22:48:41.367460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.163 [2024-07-15 22:48:41.367489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.163 qpair failed and we were unable to recover it.
00:24:58.163 [2024-07-15 22:48:41.367647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.163 [2024-07-15 22:48:41.367676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.163 qpair failed and we were unable to recover it.
00:24:58.163 [2024-07-15 22:48:41.367853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.163 [2024-07-15 22:48:41.367887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.163 qpair failed and we were unable to recover it.
00:24:58.163 [2024-07-15 22:48:41.368044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.163 [2024-07-15 22:48:41.368071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.163 qpair failed and we were unable to recover it.
00:24:58.163 [2024-07-15 22:48:41.368265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.163 [2024-07-15 22:48:41.368294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.163 qpair failed and we were unable to recover it. 00:24:58.163 [2024-07-15 22:48:41.368485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.163 [2024-07-15 22:48:41.368514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.163 qpair failed and we were unable to recover it. 00:24:58.163 [2024-07-15 22:48:41.368701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.163 [2024-07-15 22:48:41.368728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.163 qpair failed and we were unable to recover it. 00:24:58.163 [2024-07-15 22:48:41.368994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.163 [2024-07-15 22:48:41.369024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.163 qpair failed and we were unable to recover it. 00:24:58.163 [2024-07-15 22:48:41.369222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.163 [2024-07-15 22:48:41.369251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.163 qpair failed and we were unable to recover it. 
00:24:58.163 [2024-07-15 22:48:41.369442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.163 [2024-07-15 22:48:41.369471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.163 qpair failed and we were unable to recover it. 00:24:58.163 [2024-07-15 22:48:41.369695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.163 [2024-07-15 22:48:41.369721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.163 qpair failed and we were unable to recover it. 00:24:58.163 [2024-07-15 22:48:41.369964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.163 [2024-07-15 22:48:41.369994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.163 qpair failed and we were unable to recover it. 00:24:58.163 [2024-07-15 22:48:41.370202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.163 [2024-07-15 22:48:41.370245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.163 qpair failed and we were unable to recover it. 00:24:58.163 [2024-07-15 22:48:41.370410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.163 [2024-07-15 22:48:41.370439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.163 qpair failed and we were unable to recover it. 
00:24:58.163 [2024-07-15 22:48:41.370618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.163 [2024-07-15 22:48:41.370645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.163 qpair failed and we were unable to recover it. 00:24:58.163 [2024-07-15 22:48:41.370840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.163 [2024-07-15 22:48:41.370869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.163 qpair failed and we were unable to recover it. 00:24:58.163 [2024-07-15 22:48:41.371087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.163 [2024-07-15 22:48:41.371116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.163 qpair failed and we were unable to recover it. 00:24:58.163 [2024-07-15 22:48:41.371274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.163 [2024-07-15 22:48:41.371303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.163 qpair failed and we were unable to recover it. 00:24:58.163 [2024-07-15 22:48:41.371479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.163 [2024-07-15 22:48:41.371506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.163 qpair failed and we were unable to recover it. 
00:24:58.163 [2024-07-15 22:48:41.371696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.163 [2024-07-15 22:48:41.371725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.163 qpair failed and we were unable to recover it. 00:24:58.163 [2024-07-15 22:48:41.371939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.163 [2024-07-15 22:48:41.371969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.163 qpair failed and we were unable to recover it. 00:24:58.163 [2024-07-15 22:48:41.372165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.163 [2024-07-15 22:48:41.372192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.163 qpair failed and we were unable to recover it. 00:24:58.163 [2024-07-15 22:48:41.372391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.163 [2024-07-15 22:48:41.372417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.163 qpair failed and we were unable to recover it. 00:24:58.163 [2024-07-15 22:48:41.372639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.163 [2024-07-15 22:48:41.372668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.163 qpair failed and we were unable to recover it. 
00:24:58.163 [2024-07-15 22:48:41.372845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.163 [2024-07-15 22:48:41.372874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.163 qpair failed and we were unable to recover it. 00:24:58.163 [2024-07-15 22:48:41.373099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.163 [2024-07-15 22:48:41.373129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.163 qpair failed and we were unable to recover it. 00:24:58.163 [2024-07-15 22:48:41.373346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.163 [2024-07-15 22:48:41.373373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.163 qpair failed and we were unable to recover it. 00:24:58.163 [2024-07-15 22:48:41.373543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.163 [2024-07-15 22:48:41.373572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.163 qpair failed and we were unable to recover it. 00:24:58.163 [2024-07-15 22:48:41.373733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.163 [2024-07-15 22:48:41.373763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.163 qpair failed and we were unable to recover it. 
00:24:58.163 [2024-07-15 22:48:41.373953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.163 [2024-07-15 22:48:41.373988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.163 qpair failed and we were unable to recover it. 00:24:58.163 [2024-07-15 22:48:41.374213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.163 [2024-07-15 22:48:41.374240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.163 qpair failed and we were unable to recover it. 00:24:58.163 [2024-07-15 22:48:41.374397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.163 [2024-07-15 22:48:41.374427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-15 22:48:41.374654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.374680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-15 22:48:41.374844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.374870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 
00:24:58.164 [2024-07-15 22:48:41.375075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.375101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-15 22:48:41.375274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.375304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-15 22:48:41.375496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.375526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-15 22:48:41.375746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.375775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-15 22:48:41.375994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.376021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 
00:24:58.164 [2024-07-15 22:48:41.376184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.376214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-15 22:48:41.377478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.377516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-15 22:48:41.377736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.377767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-15 22:48:41.378004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.378031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-15 22:48:41.378244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.378273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 
00:24:58.164 [2024-07-15 22:48:41.378465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.378494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-15 22:48:41.378686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.378716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-15 22:48:41.378913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.378940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-15 22:48:41.379106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.379136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-15 22:48:41.379317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.379346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 
00:24:58.164 [2024-07-15 22:48:41.379563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.379594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-15 22:48:41.379797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.379824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-15 22:48:41.380026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.380056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-15 22:48:41.380251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.380280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-15 22:48:41.380495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.380525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 
00:24:58.164 [2024-07-15 22:48:41.380727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.380755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-15 22:48:41.380926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.380956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-15 22:48:41.381163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.381192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-15 22:48:41.381382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.381412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-15 22:48:41.381602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.381628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 
00:24:58.164 [2024-07-15 22:48:41.381853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.381890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-15 22:48:41.382093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.382120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-15 22:48:41.382346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.382375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-15 22:48:41.382541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.382568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-15 22:48:41.382723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.382750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 
00:24:58.164 [2024-07-15 22:48:41.382971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.383001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-15 22:48:41.383197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.383226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-15 22:48:41.383427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.383453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-15 22:48:41.383653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.383679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-15 22:48:41.383888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.383918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 
00:24:58.164 [2024-07-15 22:48:41.384112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.384150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-15 22:48:41.384316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.384358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-15 22:48:41.384584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.384610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-15 22:48:41.384813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-15 22:48:41.384840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-15 22:48:41.385066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.385095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 
00:24:58.165 [2024-07-15 22:48:41.385298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.385324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-15 22:48:41.385519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.385548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-15 22:48:41.385769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.385798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-15 22:48:41.385996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.386025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-15 22:48:41.386201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.386238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 
00:24:58.165 [2024-07-15 22:48:41.386428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.386457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-15 22:48:41.386662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.386691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-15 22:48:41.386896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.386925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-15 22:48:41.387147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.387174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-15 22:48:41.387369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.387399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 
00:24:58.165 [2024-07-15 22:48:41.387607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.387634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-15 22:48:41.388604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.388638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-15 22:48:41.388882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.388910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-15 22:48:41.389116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.389142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-15 22:48:41.389321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.389347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 
00:24:58.165 [2024-07-15 22:48:41.389531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.389562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-15 22:48:41.389756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.389783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-15 22:48:41.389974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.390004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-15 22:48:41.390202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.390229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-15 22:48:41.390448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.390489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 
00:24:58.165 [2024-07-15 22:48:41.390665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.390691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-15 22:48:41.390834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.390859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-15 22:48:41.391085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.391116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-15 22:48:41.391292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.391326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-15 22:48:41.391500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.391527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 
00:24:58.165 [2024-07-15 22:48:41.391714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.391740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-15 22:48:41.391928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.391958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-15 22:48:41.392185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.392214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-15 22:48:41.392437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.392474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-15 22:48:41.392642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.392671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 
00:24:58.165 [2024-07-15 22:48:41.392863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.392904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-15 22:48:41.393122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.393152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-15 22:48:41.393375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.393401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-15 22:48:41.393648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.393674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-15 22:48:41.393887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.393931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 
00:24:58.165 [2024-07-15 22:48:41.394098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-15 22:48:41.394129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-15 22:48:41.394355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.394382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-15 22:48:41.394569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.394598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-15 22:48:41.394816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.394846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-15 22:48:41.395030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.395060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 
00:24:58.166 [2024-07-15 22:48:41.395827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.395860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-15 22:48:41.396093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.396124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-15 22:48:41.396322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.396352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-15 22:48:41.396547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.396577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-15 22:48:41.396811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.396838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 
00:24:58.166 [2024-07-15 22:48:41.397024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.397053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-15 22:48:41.397262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.397292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-15 22:48:41.397479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.397518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-15 22:48:41.397733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.397759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-15 22:48:41.397938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.397969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 
00:24:58.166 [2024-07-15 22:48:41.398166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.398202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-15 22:48:41.398405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.398435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-15 22:48:41.398615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.398641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-15 22:48:41.398836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.398881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-15 22:48:41.399046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.399076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 
00:24:58.166 [2024-07-15 22:48:41.399262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.399291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-15 22:48:41.399478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.399505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-15 22:48:41.399725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.399754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-15 22:48:41.399949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.399977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-15 22:48:41.400154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.400191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 
00:24:58.166 [2024-07-15 22:48:41.400435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.400461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-15 22:48:41.400670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.400700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-15 22:48:41.400897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.400924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-15 22:48:41.401097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.401124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-15 22:48:41.401339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.401369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 
00:24:58.166 [2024-07-15 22:48:41.401545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.401574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-15 22:48:41.401765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.401796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-15 22:48:41.401959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.401990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-15 22:48:41.402212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.402245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-15 22:48:41.402458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.402485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 
00:24:58.166 [2024-07-15 22:48:41.402660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.402687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-15 22:48:41.402837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.402864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-15 22:48:41.403055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.403082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-15 22:48:41.403282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.403309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-15 22:48:41.403538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.403567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 
00:24:58.166 [2024-07-15 22:48:41.403749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-15 22:48:41.403780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-15 22:48:41.403973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.404003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-15 22:48:41.404198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.404227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-15 22:48:41.404435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.404461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-15 22:48:41.404654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.404683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 
00:24:58.167 [2024-07-15 22:48:41.404882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.404909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-15 22:48:41.405089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.405119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-15 22:48:41.405357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.405383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-15 22:48:41.405545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.405570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-15 22:48:41.405743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.405769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 
00:24:58.167 [2024-07-15 22:48:41.405916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.405942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-15 22:48:41.406090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.406115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-15 22:48:41.406291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.406320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-15 22:48:41.406514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.406539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-15 22:48:41.406740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.406769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 
00:24:58.167 [2024-07-15 22:48:41.406961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.406987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-15 22:48:41.407163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.407198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-15 22:48:41.407400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.407425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-15 22:48:41.407627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.407655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-15 22:48:41.407884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.407909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 
00:24:58.167 [2024-07-15 22:48:41.408087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.408115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-15 22:48:41.408304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.408329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-15 22:48:41.408526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.408554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-15 22:48:41.408777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.408803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-15 22:48:41.409018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.409047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 
00:24:58.167 [2024-07-15 22:48:41.409241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.409266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-15 22:48:41.409468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.409496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-15 22:48:41.409691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.409717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-15 22:48:41.409922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.409951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-15 22:48:41.410115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.410140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 
00:24:58.167 [2024-07-15 22:48:41.410300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.410328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-15 22:48:41.410521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.410546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-15 22:48:41.410693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.410718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-15 22:48:41.410894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.410919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-15 22:48:41.411071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.411097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 
00:24:58.167 [2024-07-15 22:48:41.411268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.411300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-15 22:48:41.411456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.411482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-15 22:48:41.411673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.411699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-15 22:48:41.411888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.411915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-15 22:48:41.412060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.412086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 
00:24:58.167 [2024-07-15 22:48:41.412252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.412296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-15 22:48:41.412574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.412641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-15 22:48:41.412830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-15 22:48:41.412858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-15 22:48:41.413095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.413121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-15 22:48:41.413318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.413343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 
00:24:58.168 [2024-07-15 22:48:41.413555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.413584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-15 22:48:41.413757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.413800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-15 22:48:41.413986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.414013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-15 22:48:41.414187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.414213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-15 22:48:41.414434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.414483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 
00:24:58.168 [2024-07-15 22:48:41.414762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.414789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-15 22:48:41.414957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.414983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-15 22:48:41.415163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.415188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-15 22:48:41.415392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.415422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-15 22:48:41.415701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.415761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 
00:24:58.168 [2024-07-15 22:48:41.415968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.415994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-15 22:48:41.416187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.416241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-15 22:48:41.416596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.416657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-15 22:48:41.416861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.416892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-15 22:48:41.417095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.417121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 
00:24:58.168 [2024-07-15 22:48:41.417358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.417406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-15 22:48:41.417605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.417634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-15 22:48:41.417852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.417888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-15 22:48:41.418084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.418110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-15 22:48:41.418323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.418351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 
00:24:58.168 [2024-07-15 22:48:41.418723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.418770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-15 22:48:41.418985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.419012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-15 22:48:41.419778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.419810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-15 22:48:41.420007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.420034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-15 22:48:41.420210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.420236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 
00:24:58.168 [2024-07-15 22:48:41.420408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.420451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-15 22:48:41.420649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.420677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-15 22:48:41.420869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.420903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-15 22:48:41.421090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.421116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-15 22:48:41.421304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.421338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 
00:24:58.168 [2024-07-15 22:48:41.421530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.421575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-15 22:48:41.421777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.421803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-15 22:48:41.421981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.422008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-15 22:48:41.422148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.422174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-15 22:48:41.422343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.422388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 
00:24:58.168 [2024-07-15 22:48:41.422528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.422553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-15 22:48:41.422728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.422754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-15 22:48:41.422932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.422958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-15 22:48:41.423128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-15 22:48:41.423157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-15 22:48:41.423348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.423381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 
00:24:58.169 [2024-07-15 22:48:41.423736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.423795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-15 22:48:41.423983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.424011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-15 22:48:41.424181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.424210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-15 22:48:41.424408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.424434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-15 22:48:41.424616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.424642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 
00:24:58.169 [2024-07-15 22:48:41.424820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.424846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-15 22:48:41.425036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.425061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-15 22:48:41.425335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.425386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-15 22:48:41.425699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.425760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-15 22:48:41.425967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.425993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 
00:24:58.169 [2024-07-15 22:48:41.426142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.426178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-15 22:48:41.426463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.426492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-15 22:48:41.426708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.426736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-15 22:48:41.426913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.426940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-15 22:48:41.427116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.427145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 
00:24:58.169 [2024-07-15 22:48:41.427373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.427401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-15 22:48:41.427695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.427753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-15 22:48:41.427942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.427967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-15 22:48:41.428137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.428165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-15 22:48:41.428377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.428422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 
00:24:58.169 [2024-07-15 22:48:41.428691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.428740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-15 22:48:41.428931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.428957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-15 22:48:41.429178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.429207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-15 22:48:41.429471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.429517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-15 22:48:41.429751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.429777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 
00:24:58.169 [2024-07-15 22:48:41.429932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.429957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-15 22:48:41.430151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.430177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-15 22:48:41.430355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.430381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-15 22:48:41.430563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.430589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-15 22:48:41.430740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.430766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 
00:24:58.169 [2024-07-15 22:48:41.430955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.430982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-15 22:48:41.431133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.431174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-15 22:48:41.431403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.431432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-15 22:48:41.431647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.431675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-15 22:48:41.431873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.431922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 
00:24:58.169 [2024-07-15 22:48:41.432075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.432101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-15 22:48:41.432271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.432296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-15 22:48:41.432491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.432519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-15 22:48:41.432717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.432745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-15 22:48:41.432905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.432948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 
00:24:58.169 [2024-07-15 22:48:41.433107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-15 22:48:41.433136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-15 22:48:41.433339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-15 22:48:41.433367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-15 22:48:41.433561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-15 22:48:41.433588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-15 22:48:41.433774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-15 22:48:41.433809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-15 22:48:41.433992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-15 22:48:41.434018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 
00:24:58.170 [2024-07-15 22:48:41.434189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-15 22:48:41.434214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-15 22:48:41.434431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-15 22:48:41.434458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-15 22:48:41.434651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-15 22:48:41.434690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-15 22:48:41.434855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-15 22:48:41.434907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-15 22:48:41.435085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-15 22:48:41.435110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 
00:24:58.170 [2024-07-15 22:48:41.435313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-15 22:48:41.435341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-15 22:48:41.435584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-15 22:48:41.435631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-15 22:48:41.435832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-15 22:48:41.435858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-15 22:48:41.436037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-15 22:48:41.436062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-15 22:48:41.436258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-15 22:48:41.436304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 
00:24:58.170 [2024-07-15 22:48:41.436581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.170 [2024-07-15 22:48:41.436627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.170 qpair failed and we were unable to recover it.
00:24:58.170 [2024-07-15 22:48:41.436824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.170 [2024-07-15 22:48:41.436849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.170 qpair failed and we were unable to recover it.
00:24:58.170 [2024-07-15 22:48:41.437027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.170 [2024-07-15 22:48:41.437053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.170 qpair failed and we were unable to recover it.
00:24:58.170 [2024-07-15 22:48:41.437232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.170 [2024-07-15 22:48:41.437261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.170 qpair failed and we were unable to recover it.
00:24:58.170 [2024-07-15 22:48:41.437482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.170 [2024-07-15 22:48:41.437510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.170 qpair failed and we were unable to recover it.
00:24:58.170 [2024-07-15 22:48:41.437686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd4d0e0 is same with the state(5) to be set
00:24:58.170 [2024-07-15 22:48:41.437935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.170 [2024-07-15 22:48:41.437975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.170 qpair failed and we were unable to recover it.
00:24:58.170 [2024-07-15 22:48:41.438184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.170 [2024-07-15 22:48:41.438215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.170 qpair failed and we were unable to recover it.
00:24:58.170 [2024-07-15 22:48:41.438551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.170 [2024-07-15 22:48:41.438610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.170 qpair failed and we were unable to recover it.
00:24:58.170 [2024-07-15 22:48:41.438830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.170 [2024-07-15 22:48:41.438858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.170 qpair failed and we were unable to recover it.
00:24:58.170 [2024-07-15 22:48:41.439068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.170 [2024-07-15 22:48:41.439095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.170 qpair failed and we were unable to recover it.
00:24:58.170 [2024-07-15 22:48:41.439257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.170 [2024-07-15 22:48:41.439284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.170 qpair failed and we were unable to recover it.
00:24:58.170 [2024-07-15 22:48:41.439485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.170 [2024-07-15 22:48:41.439528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.170 qpair failed and we were unable to recover it.
00:24:58.170 [2024-07-15 22:48:41.439912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.170 [2024-07-15 22:48:41.439962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.170 qpair failed and we were unable to recover it.
00:24:58.170 [2024-07-15 22:48:41.440135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.170 [2024-07-15 22:48:41.440177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.170 qpair failed and we were unable to recover it.
00:24:58.170 [2024-07-15 22:48:41.440425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.170 [2024-07-15 22:48:41.440467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.170 qpair failed and we were unable to recover it.
00:24:58.170 [2024-07-15 22:48:41.441586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.170 [2024-07-15 22:48:41.441620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.170 qpair failed and we were unable to recover it.
00:24:58.170 [2024-07-15 22:48:41.441823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.170 [2024-07-15 22:48:41.441853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.170 qpair failed and we were unable to recover it.
00:24:58.170 [2024-07-15 22:48:41.442066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.170 [2024-07-15 22:48:41.442092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.170 qpair failed and we were unable to recover it.
00:24:58.170 [2024-07-15 22:48:41.442244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.170 [2024-07-15 22:48:41.442270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.170 qpair failed and we were unable to recover it.
00:24:58.170 [2024-07-15 22:48:41.442459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.170 [2024-07-15 22:48:41.442485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.170 qpair failed and we were unable to recover it.
00:24:58.170 [2024-07-15 22:48:41.442682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.442715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.442905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.442932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.443116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.443144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.443370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.443417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.443634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.443681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.443888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.443922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.444089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.444115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.444339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.444386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.444638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.444668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.444858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.444894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.445076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.445103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.445294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.445322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.445663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.445691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.445898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.445945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.446101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.446126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.446358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.446386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.446614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.446643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.446867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.446908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.447078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.447103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.447285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.447320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.447659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.447711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.447929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.447956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.448115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.448158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.448382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.448412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.448665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.448694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.448900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.448928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.449106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.449132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.449334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.449363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.449536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.449562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.449718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.449744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.449935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.449961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.450184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.450245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.450482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.450530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.450704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.450730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.450905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.450962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.451194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.451250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.451517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.451549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.451760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.451787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.451994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.452025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.452268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.452296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.452466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.171 [2024-07-15 22:48:41.452493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.171 qpair failed and we were unable to recover it.
00:24:58.171 [2024-07-15 22:48:41.452661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.172 [2024-07-15 22:48:41.452687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.172 qpair failed and we were unable to recover it.
00:24:58.172 [2024-07-15 22:48:41.452872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.172 [2024-07-15 22:48:41.452904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.172 qpair failed and we were unable to recover it.
00:24:58.172 [2024-07-15 22:48:41.453139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.172 [2024-07-15 22:48:41.453179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.172 qpair failed and we were unable to recover it.
00:24:58.172 [2024-07-15 22:48:41.453406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.172 [2024-07-15 22:48:41.453435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.172 qpair failed and we were unable to recover it.
00:24:58.172 [2024-07-15 22:48:41.453695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.172 [2024-07-15 22:48:41.453735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.172 qpair failed and we were unable to recover it.
00:24:58.172 [2024-07-15 22:48:41.453946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.172 [2024-07-15 22:48:41.453973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.172 qpair failed and we were unable to recover it.
00:24:58.172 [2024-07-15 22:48:41.454151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.172 [2024-07-15 22:48:41.454194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.172 qpair failed and we were unable to recover it.
00:24:58.172 [2024-07-15 22:48:41.454407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.172 [2024-07-15 22:48:41.454450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.172 qpair failed and we were unable to recover it.
00:24:58.172 [2024-07-15 22:48:41.454694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.172 [2024-07-15 22:48:41.454742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.172 qpair failed and we were unable to recover it.
00:24:58.172 [2024-07-15 22:48:41.454988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.172 [2024-07-15 22:48:41.455018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.172 qpair failed and we were unable to recover it.
00:24:58.172 [2024-07-15 22:48:41.455209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.172 [2024-07-15 22:48:41.455240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.172 qpair failed and we were unable to recover it.
00:24:58.172 [2024-07-15 22:48:41.455535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.172 [2024-07-15 22:48:41.455585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.172 qpair failed and we were unable to recover it.
00:24:58.172 [2024-07-15 22:48:41.455765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.172 [2024-07-15 22:48:41.455791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.172 qpair failed and we were unable to recover it.
00:24:58.172 [2024-07-15 22:48:41.455993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.172 [2024-07-15 22:48:41.456020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.172 qpair failed and we were unable to recover it.
00:24:58.172 [2024-07-15 22:48:41.456226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.172 [2024-07-15 22:48:41.456255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.172 qpair failed and we were unable to recover it.
00:24:58.172 [2024-07-15 22:48:41.456469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.172 [2024-07-15 22:48:41.456503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.172 qpair failed and we were unable to recover it.
00:24:58.172 [2024-07-15 22:48:41.456684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.172 [2024-07-15 22:48:41.456712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.172 qpair failed and we were unable to recover it.
00:24:58.172 [2024-07-15 22:48:41.456932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.172 [2024-07-15 22:48:41.456985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.172 qpair failed and we were unable to recover it.
00:24:58.172 [2024-07-15 22:48:41.457187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.172 [2024-07-15 22:48:41.457217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.172 qpair failed and we were unable to recover it.
00:24:58.172 [2024-07-15 22:48:41.457457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.172 [2024-07-15 22:48:41.457494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.172 qpair failed and we were unable to recover it.
00:24:58.172 [2024-07-15 22:48:41.457665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.172 [2024-07-15 22:48:41.457690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.172 qpair failed and we were unable to recover it.
00:24:58.172 [2024-07-15 22:48:41.457868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.172 [2024-07-15 22:48:41.457900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.172 qpair failed and we were unable to recover it.
00:24:58.172 [2024-07-15 22:48:41.458076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.172 [2024-07-15 22:48:41.458106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.172 qpair failed and we were unable to recover it.
00:24:58.172 [2024-07-15 22:48:41.458367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.172 [2024-07-15 22:48:41.458431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.172 qpair failed and we were unable to recover it.
00:24:58.172 [2024-07-15 22:48:41.458671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.172 [2024-07-15 22:48:41.458700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.172 qpair failed and we were unable to recover it.
00:24:58.172 [2024-07-15 22:48:41.458923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.172 [2024-07-15 22:48:41.458951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.172 qpair failed and we were unable to recover it.
00:24:58.172 [2024-07-15 22:48:41.459130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.172 [2024-07-15 22:48:41.459160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.172 qpair failed and we were unable to recover it.
00:24:58.172 [2024-07-15 22:48:41.459341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.172 [2024-07-15 22:48:41.459370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.172 qpair failed and we were unable to recover it.
00:24:58.172 [2024-07-15 22:48:41.459613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.172 [2024-07-15 22:48:41.459660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.172 qpair failed and we were unable to recover it.
00:24:58.172 [2024-07-15 22:48:41.459838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.172 [2024-07-15 22:48:41.459865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.172 qpair failed and we were unable to recover it.
00:24:58.172 [2024-07-15 22:48:41.460040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.172 [2024-07-15 22:48:41.460067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.172 qpair failed and we were unable to recover it.
00:24:58.172 [2024-07-15 22:48:41.460249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.172 [2024-07-15 22:48:41.460288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.172 qpair failed and we were unable to recover it.
00:24:58.172 [2024-07-15 22:48:41.460589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.172 [2024-07-15 22:48:41.460640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.172 qpair failed and we were unable to recover it.
00:24:58.172 [2024-07-15 22:48:41.460858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.172 [2024-07-15 22:48:41.460889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:58.172 qpair failed and we were unable to recover it.
00:24:58.172 [2024-07-15 22:48:41.461036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-15 22:48:41.461063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-15 22:48:41.461265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-15 22:48:41.461294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-15 22:48:41.461510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-15 22:48:41.461539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-15 22:48:41.461737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-15 22:48:41.461764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-15 22:48:41.461962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-15 22:48:41.461988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 
00:24:58.172 [2024-07-15 22:48:41.462158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-15 22:48:41.462188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-15 22:48:41.462438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-15 22:48:41.462468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-15 22:48:41.462853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.462948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-15 22:48:41.463128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.463157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-15 22:48:41.463381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.463410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 
00:24:58.173 [2024-07-15 22:48:41.463665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.463699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-15 22:48:41.463909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.463936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-15 22:48:41.464145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.464175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-15 22:48:41.464421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.464450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-15 22:48:41.464709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.464755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 
00:24:58.173 [2024-07-15 22:48:41.464962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.464993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-15 22:48:41.465187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.465230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-15 22:48:41.465480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.465509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-15 22:48:41.465697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.465724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-15 22:48:41.465945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.465975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 
00:24:58.173 [2024-07-15 22:48:41.466220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.466249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-15 22:48:41.466453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.466506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-15 22:48:41.466682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.466709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-15 22:48:41.466888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.466918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-15 22:48:41.467119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.467149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 
00:24:58.173 [2024-07-15 22:48:41.467363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.467392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-15 22:48:41.467557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.467583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-15 22:48:41.467787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.467814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-15 22:48:41.467988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.468018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-15 22:48:41.468204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.468233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 
00:24:58.173 [2024-07-15 22:48:41.468468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.468497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-15 22:48:41.468702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.468728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-15 22:48:41.468923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.468953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-15 22:48:41.469146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.469175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-15 22:48:41.469424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.469453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 
00:24:58.173 [2024-07-15 22:48:41.469614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.469641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-15 22:48:41.469843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.469886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-15 22:48:41.470047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.470074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-15 22:48:41.470261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.470286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-15 22:48:41.470484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.470510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 
00:24:58.173 [2024-07-15 22:48:41.470686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.470715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-15 22:48:41.470887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.470917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-15 22:48:41.471108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.471134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-15 22:48:41.471305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.471333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-15 22:48:41.471503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.471533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 
00:24:58.173 [2024-07-15 22:48:41.471727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.471755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-15 22:48:41.471943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-15 22:48:41.471970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-15 22:48:41.472149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.472174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-15 22:48:41.472371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.472399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-15 22:48:41.472593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.472641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 
00:24:58.174 [2024-07-15 22:48:41.472816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.472850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-15 22:48:41.473101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.473141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-15 22:48:41.473317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.473361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-15 22:48:41.473533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.473576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-15 22:48:41.473771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.473815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 
00:24:58.174 [2024-07-15 22:48:41.473987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.474013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-15 22:48:41.474188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.474230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-15 22:48:41.474430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.474474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-15 22:48:41.474693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.474740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-15 22:48:41.474894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.474930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 
00:24:58.174 [2024-07-15 22:48:41.475105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.475153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-15 22:48:41.475340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.475368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-15 22:48:41.475606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.475657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-15 22:48:41.475844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.475885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-15 22:48:41.476047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.476072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 
00:24:58.174 [2024-07-15 22:48:41.476254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.476283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-15 22:48:41.476480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.476524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-15 22:48:41.476727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.476771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-15 22:48:41.477008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.477067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-15 22:48:41.477243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.477273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 
00:24:58.174 [2024-07-15 22:48:41.477469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.477518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-15 22:48:41.477756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.477803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-15 22:48:41.478000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.478027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-15 22:48:41.478166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.478192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-15 22:48:41.478396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.478422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 
00:24:58.174 [2024-07-15 22:48:41.478571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.478613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-15 22:48:41.478781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.478810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-15 22:48:41.478994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.479021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-15 22:48:41.479208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.479236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-15 22:48:41.479412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.479454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 
00:24:58.174 [2024-07-15 22:48:41.479675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.479721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-15 22:48:41.479901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.479927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-15 22:48:41.480074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-15 22:48:41.480099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.175 [2024-07-15 22:48:41.480301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.175 [2024-07-15 22:48:41.480330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.175 qpair failed and we were unable to recover it. 00:24:58.175 [2024-07-15 22:48:41.480533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.175 [2024-07-15 22:48:41.480579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.175 qpair failed and we were unable to recover it. 
00:24:58.175 [2024-07-15 22:48:41.480748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.175 [2024-07-15 22:48:41.480776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.175 qpair failed and we were unable to recover it. 00:24:58.175 [2024-07-15 22:48:41.480983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.175 [2024-07-15 22:48:41.481009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.175 qpair failed and we were unable to recover it. 00:24:58.175 [2024-07-15 22:48:41.481153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.175 [2024-07-15 22:48:41.481178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.175 qpair failed and we were unable to recover it. 00:24:58.175 [2024-07-15 22:48:41.481360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.175 [2024-07-15 22:48:41.481388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.175 qpair failed and we were unable to recover it. 00:24:58.175 [2024-07-15 22:48:41.481622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.175 [2024-07-15 22:48:41.481669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.175 qpair failed and we were unable to recover it. 
00:24:58.177 [2024-07-15 22:48:41.506272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-15 22:48:41.506300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-15 22:48:41.506519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-15 22:48:41.506547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-15 22:48:41.506736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-15 22:48:41.506764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-15 22:48:41.506976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-15 22:48:41.507003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-15 22:48:41.507169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-15 22:48:41.507194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 
00:24:58.177 [2024-07-15 22:48:41.507414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-15 22:48:41.507442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-15 22:48:41.507640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-15 22:48:41.507685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-15 22:48:41.507858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-15 22:48:41.507899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 00:24:58.178 [2024-07-15 22:48:41.508077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.508103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 00:24:58.178 [2024-07-15 22:48:41.508299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.508327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 
00:24:58.178 [2024-07-15 22:48:41.508510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.508557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 00:24:58.178 [2024-07-15 22:48:41.508743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.508771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 00:24:58.178 [2024-07-15 22:48:41.508970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.508995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 00:24:58.178 [2024-07-15 22:48:41.509167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.509203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 00:24:58.178 [2024-07-15 22:48:41.509406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.509434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 
00:24:58.178 [2024-07-15 22:48:41.509619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.509647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 00:24:58.178 [2024-07-15 22:48:41.509853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.509886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 00:24:58.178 [2024-07-15 22:48:41.510081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.510106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 00:24:58.178 [2024-07-15 22:48:41.510319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.510347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 00:24:58.178 [2024-07-15 22:48:41.510573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.510619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 
00:24:58.178 [2024-07-15 22:48:41.510835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.510863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 00:24:58.178 [2024-07-15 22:48:41.511044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.511069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 00:24:58.178 [2024-07-15 22:48:41.511223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.511249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 00:24:58.178 [2024-07-15 22:48:41.511445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.511473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 00:24:58.178 [2024-07-15 22:48:41.511638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.511666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 
00:24:58.178 [2024-07-15 22:48:41.511904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.511947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 00:24:58.178 [2024-07-15 22:48:41.512116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.512157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 00:24:58.178 [2024-07-15 22:48:41.512318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.512346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 00:24:58.178 [2024-07-15 22:48:41.512531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.512578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 00:24:58.178 [2024-07-15 22:48:41.512821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.512849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 
00:24:58.178 [2024-07-15 22:48:41.513075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.513101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 00:24:58.178 [2024-07-15 22:48:41.513274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.513299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 00:24:58.178 [2024-07-15 22:48:41.513530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.513570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 00:24:58.178 [2024-07-15 22:48:41.513752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.513780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 00:24:58.178 [2024-07-15 22:48:41.513961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.513987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 
00:24:58.178 [2024-07-15 22:48:41.514136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.514180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 00:24:58.178 [2024-07-15 22:48:41.514387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.514415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 00:24:58.178 [2024-07-15 22:48:41.514598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.514645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 00:24:58.178 [2024-07-15 22:48:41.514831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.514860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 00:24:58.178 [2024-07-15 22:48:41.515072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.515097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 
00:24:58.178 [2024-07-15 22:48:41.515266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.515295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 00:24:58.178 [2024-07-15 22:48:41.515496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.515524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 00:24:58.178 [2024-07-15 22:48:41.515709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.178 [2024-07-15 22:48:41.515737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.178 qpair failed and we were unable to recover it. 00:24:58.178 [2024-07-15 22:48:41.515929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.515955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 00:24:58.179 [2024-07-15 22:48:41.516172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.516200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 
00:24:58.179 [2024-07-15 22:48:41.516392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.516420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 00:24:58.179 [2024-07-15 22:48:41.516609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.516637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 00:24:58.179 [2024-07-15 22:48:41.516850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.516884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 00:24:58.179 [2024-07-15 22:48:41.517081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.517107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 00:24:58.179 [2024-07-15 22:48:41.517285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.517310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 
00:24:58.179 [2024-07-15 22:48:41.517526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.517555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 00:24:58.179 [2024-07-15 22:48:41.517734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.517762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 00:24:58.179 [2024-07-15 22:48:41.517938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.517963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 00:24:58.179 [2024-07-15 22:48:41.518114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.518139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 00:24:58.179 [2024-07-15 22:48:41.518343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.518371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 
00:24:58.179 [2024-07-15 22:48:41.518583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.518609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 00:24:58.179 [2024-07-15 22:48:41.518806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.518834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 00:24:58.179 [2024-07-15 22:48:41.519035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.519061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 00:24:58.179 [2024-07-15 22:48:41.519217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.519243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 00:24:58.179 [2024-07-15 22:48:41.519424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.519450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 
00:24:58.179 [2024-07-15 22:48:41.519618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.519646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 00:24:58.179 [2024-07-15 22:48:41.519813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.519839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 00:24:58.179 [2024-07-15 22:48:41.520062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.520091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 00:24:58.179 [2024-07-15 22:48:41.520281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.520309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 00:24:58.179 [2024-07-15 22:48:41.520500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.520525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 
00:24:58.179 [2024-07-15 22:48:41.520725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.520753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 00:24:58.179 [2024-07-15 22:48:41.520967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.520997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 00:24:58.179 [2024-07-15 22:48:41.521160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.521186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 00:24:58.179 [2024-07-15 22:48:41.521420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.521449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 00:24:58.179 [2024-07-15 22:48:41.521670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.521698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 
00:24:58.179 [2024-07-15 22:48:41.521861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.521891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 00:24:58.179 [2024-07-15 22:48:41.522115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.522143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 00:24:58.179 [2024-07-15 22:48:41.522335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.522363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 00:24:58.179 [2024-07-15 22:48:41.522570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.522596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 00:24:58.179 [2024-07-15 22:48:41.522789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.522816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 
00:24:58.179 [2024-07-15 22:48:41.523016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.523047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 00:24:58.179 [2024-07-15 22:48:41.523275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.523300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 00:24:58.179 [2024-07-15 22:48:41.523523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.523551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 00:24:58.179 [2024-07-15 22:48:41.523744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.523773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 00:24:58.179 [2024-07-15 22:48:41.523967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.179 [2024-07-15 22:48:41.523993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.179 qpair failed and we were unable to recover it. 
00:24:58.182 [2024-07-15 22:48:41.547953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.182 [2024-07-15 22:48:41.547979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.182 qpair failed and we were unable to recover it. 00:24:58.182 [2024-07-15 22:48:41.548201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.182 [2024-07-15 22:48:41.548229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.182 qpair failed and we were unable to recover it. 00:24:58.182 [2024-07-15 22:48:41.548418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.182 [2024-07-15 22:48:41.548446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.182 qpair failed and we were unable to recover it. 00:24:58.182 [2024-07-15 22:48:41.548666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.182 [2024-07-15 22:48:41.548692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.182 qpair failed and we were unable to recover it. 00:24:58.182 [2024-07-15 22:48:41.548862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.182 [2024-07-15 22:48:41.548895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.182 qpair failed and we were unable to recover it. 
00:24:58.182 [2024-07-15 22:48:41.549084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.182 [2024-07-15 22:48:41.549109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.182 qpair failed and we were unable to recover it. 00:24:58.182 [2024-07-15 22:48:41.549264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.182 [2024-07-15 22:48:41.549289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.182 qpair failed and we were unable to recover it. 00:24:58.182 [2024-07-15 22:48:41.549486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.182 [2024-07-15 22:48:41.549514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.182 qpair failed and we were unable to recover it. 00:24:58.182 [2024-07-15 22:48:41.549683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.182 [2024-07-15 22:48:41.549712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.182 qpair failed and we were unable to recover it. 00:24:58.182 [2024-07-15 22:48:41.549915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.182 [2024-07-15 22:48:41.549941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.182 qpair failed and we were unable to recover it. 
00:24:58.182 [2024-07-15 22:48:41.550089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.182 [2024-07-15 22:48:41.550114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.182 qpair failed and we were unable to recover it. 00:24:58.182 [2024-07-15 22:48:41.550309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.182 [2024-07-15 22:48:41.550337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.182 qpair failed and we were unable to recover it. 00:24:58.182 [2024-07-15 22:48:41.550510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.182 [2024-07-15 22:48:41.550535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.182 qpair failed and we were unable to recover it. 00:24:58.182 [2024-07-15 22:48:41.550718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.182 [2024-07-15 22:48:41.550746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.182 qpair failed and we were unable to recover it. 00:24:58.182 [2024-07-15 22:48:41.550920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.182 [2024-07-15 22:48:41.550963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.182 qpair failed and we were unable to recover it. 
00:24:58.182 [2024-07-15 22:48:41.551139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.182 [2024-07-15 22:48:41.551165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.182 qpair failed and we were unable to recover it. 00:24:58.182 [2024-07-15 22:48:41.551342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.182 [2024-07-15 22:48:41.551370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.182 qpair failed and we were unable to recover it. 00:24:58.182 [2024-07-15 22:48:41.551614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.182 [2024-07-15 22:48:41.551660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.182 qpair failed and we were unable to recover it. 00:24:58.182 [2024-07-15 22:48:41.551855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.182 [2024-07-15 22:48:41.551887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.182 qpair failed and we were unable to recover it. 00:24:58.183 [2024-07-15 22:48:41.552063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.552088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 
00:24:58.183 [2024-07-15 22:48:41.552253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.552283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 00:24:58.183 [2024-07-15 22:48:41.552471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.552496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 00:24:58.183 [2024-07-15 22:48:41.552718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.552747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 00:24:58.183 [2024-07-15 22:48:41.552989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.553015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 00:24:58.183 [2024-07-15 22:48:41.553186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.553211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 
00:24:58.183 [2024-07-15 22:48:41.553408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.553437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 00:24:58.183 [2024-07-15 22:48:41.553636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.553683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 00:24:58.183 [2024-07-15 22:48:41.553886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.553915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 00:24:58.183 [2024-07-15 22:48:41.554121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.554146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 00:24:58.183 [2024-07-15 22:48:41.554354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.554383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 
00:24:58.183 [2024-07-15 22:48:41.554570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.554595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 00:24:58.183 [2024-07-15 22:48:41.554785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.554813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 00:24:58.183 [2024-07-15 22:48:41.555023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.555049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 00:24:58.183 [2024-07-15 22:48:41.555247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.555273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 00:24:58.183 [2024-07-15 22:48:41.555463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.555492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 
00:24:58.183 [2024-07-15 22:48:41.555742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.555788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 00:24:58.183 [2024-07-15 22:48:41.556010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.556036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 00:24:58.183 [2024-07-15 22:48:41.556202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.556230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 00:24:58.183 [2024-07-15 22:48:41.556425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.556453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 00:24:58.183 [2024-07-15 22:48:41.556621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.556646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 
00:24:58.183 [2024-07-15 22:48:41.556841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.556869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 00:24:58.183 [2024-07-15 22:48:41.557073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.557098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 00:24:58.183 [2024-07-15 22:48:41.557242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.557267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 00:24:58.183 [2024-07-15 22:48:41.557422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.557448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 00:24:58.183 [2024-07-15 22:48:41.557647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.557675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 
00:24:58.183 [2024-07-15 22:48:41.557881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.557907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 00:24:58.183 [2024-07-15 22:48:41.558081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.558106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 00:24:58.183 [2024-07-15 22:48:41.558344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.558369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 00:24:58.183 [2024-07-15 22:48:41.558519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.558544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 00:24:58.183 [2024-07-15 22:48:41.558739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.558785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 
00:24:58.183 [2024-07-15 22:48:41.559025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.559052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 00:24:58.183 [2024-07-15 22:48:41.559222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.559247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 00:24:58.183 [2024-07-15 22:48:41.559449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.559477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 00:24:58.183 [2024-07-15 22:48:41.559625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.183 [2024-07-15 22:48:41.559654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.183 qpair failed and we were unable to recover it. 00:24:58.184 [2024-07-15 22:48:41.559870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-15 22:48:41.559901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 
00:24:58.184 [2024-07-15 22:48:41.560106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-15 22:48:41.560134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 00:24:58.184 [2024-07-15 22:48:41.560299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-15 22:48:41.560327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 00:24:58.184 [2024-07-15 22:48:41.560493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-15 22:48:41.560518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 00:24:58.184 [2024-07-15 22:48:41.560695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-15 22:48:41.560720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 00:24:58.184 [2024-07-15 22:48:41.560900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-15 22:48:41.560939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 
00:24:58.184 [2024-07-15 22:48:41.561141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-15 22:48:41.561167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 00:24:58.184 [2024-07-15 22:48:41.561362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-15 22:48:41.561390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 00:24:58.184 [2024-07-15 22:48:41.561585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-15 22:48:41.561613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 00:24:58.184 [2024-07-15 22:48:41.561781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-15 22:48:41.561807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 00:24:58.184 [2024-07-15 22:48:41.562022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-15 22:48:41.562051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 
00:24:58.184 [2024-07-15 22:48:41.562246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-15 22:48:41.562274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 00:24:58.184 [2024-07-15 22:48:41.562493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-15 22:48:41.562518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 00:24:58.184 [2024-07-15 22:48:41.562716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-15 22:48:41.562744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 00:24:58.184 [2024-07-15 22:48:41.562959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-15 22:48:41.562992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 00:24:58.184 [2024-07-15 22:48:41.563169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-15 22:48:41.563195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 
00:24:58.184 [2024-07-15 22:48:41.563386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-15 22:48:41.563415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 00:24:58.184 [2024-07-15 22:48:41.563640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-15 22:48:41.563667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 00:24:58.184 [2024-07-15 22:48:41.563893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-15 22:48:41.563919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 00:24:58.184 [2024-07-15 22:48:41.564110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-15 22:48:41.564138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 00:24:58.184 [2024-07-15 22:48:41.564333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-15 22:48:41.564361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 
00:24:58.184 [2024-07-15 22:48:41.564583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-15 22:48:41.564608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 00:24:58.184 [2024-07-15 22:48:41.564805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-15 22:48:41.564832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 00:24:58.184 [2024-07-15 22:48:41.565061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-15 22:48:41.565089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 00:24:58.184 [2024-07-15 22:48:41.565262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-15 22:48:41.565287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 00:24:58.184 [2024-07-15 22:48:41.565456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-15 22:48:41.565481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 
[... the same "posix_sock_create: *ERROR*: connect() failed, errno = 111" / "nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." message triplet repeats through 22:48:41.589410; duplicate log lines omitted ...]
00:24:58.187 [2024-07-15 22:48:41.589584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-15 22:48:41.589609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-15 22:48:41.589780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-15 22:48:41.589808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-15 22:48:41.589975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-15 22:48:41.590001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-15 22:48:41.590167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-15 22:48:41.590192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-15 22:48:41.590399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-15 22:48:41.590424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 
00:24:58.187 [2024-07-15 22:48:41.590566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-15 22:48:41.590591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-15 22:48:41.590783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-15 22:48:41.590811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-15 22:48:41.591024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-15 22:48:41.591054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-15 22:48:41.591247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-15 22:48:41.591272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-15 22:48:41.591460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-15 22:48:41.591487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 
00:24:58.187 [2024-07-15 22:48:41.591648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-15 22:48:41.591677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-15 22:48:41.591898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-15 22:48:41.591924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-15 22:48:41.592090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-15 22:48:41.592118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-15 22:48:41.592306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-15 22:48:41.592334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-15 22:48:41.592503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-15 22:48:41.592533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 
00:24:58.187 [2024-07-15 22:48:41.592749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-15 22:48:41.592777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-15 22:48:41.592931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-15 22:48:41.592960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-15 22:48:41.593126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-15 22:48:41.593151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-15 22:48:41.593335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-15 22:48:41.593363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-15 22:48:41.593587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-15 22:48:41.593615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 
00:24:58.187 [2024-07-15 22:48:41.593786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-15 22:48:41.593810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-15 22:48:41.594005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-15 22:48:41.594033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-15 22:48:41.594187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-15 22:48:41.594215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-15 22:48:41.594404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-15 22:48:41.594430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-15 22:48:41.594620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.594648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 
00:24:58.188 [2024-07-15 22:48:41.594866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.594900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-15 22:48:41.595114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.595140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-15 22:48:41.595305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.595333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-15 22:48:41.595554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.595583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-15 22:48:41.595748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.595776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 
00:24:58.188 [2024-07-15 22:48:41.595996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.596022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-15 22:48:41.596244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.596273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-15 22:48:41.596433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.596458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-15 22:48:41.596605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.596647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-15 22:48:41.596835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.596863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 
00:24:58.188 [2024-07-15 22:48:41.597062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.597089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-15 22:48:41.597241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.597270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-15 22:48:41.597483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.597511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-15 22:48:41.597676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.597702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-15 22:48:41.597854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.597891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 
00:24:58.188 [2024-07-15 22:48:41.598032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.598074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-15 22:48:41.598265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.598294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-15 22:48:41.598447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.598472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-15 22:48:41.598617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.598642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-15 22:48:41.598847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.598872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 
00:24:58.188 [2024-07-15 22:48:41.599069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.599097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-15 22:48:41.599286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.599314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-15 22:48:41.599539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.599565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-15 22:48:41.599765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.599793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-15 22:48:41.599997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.600026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 
00:24:58.188 [2024-07-15 22:48:41.600203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.600228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-15 22:48:41.600388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.600417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-15 22:48:41.600633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.600661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-15 22:48:41.600863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.600893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-15 22:48:41.601071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.601096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 
00:24:58.188 [2024-07-15 22:48:41.601304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.601334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-15 22:48:41.601526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.601551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-15 22:48:41.601727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.601752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-15 22:48:41.601957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.602000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-15 22:48:41.602207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.602233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 
00:24:58.188 [2024-07-15 22:48:41.602426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.602454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-15 22:48:41.602611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.602639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-15 22:48:41.602836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.602861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-15 22:48:41.603037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.603063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-15 22:48:41.603240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-15 22:48:41.603268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 
00:24:58.188 [2024-07-15 22:48:41.603464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-15 22:48:41.603490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-15 22:48:41.603667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-15 22:48:41.603692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-15 22:48:41.603885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-15 22:48:41.603914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-15 22:48:41.604108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-15 22:48:41.604133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-15 22:48:41.604296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-15 22:48:41.604324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 
00:24:58.189 [2024-07-15 22:48:41.604514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-15 22:48:41.604542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-15 22:48:41.604703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-15 22:48:41.604728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-15 22:48:41.604921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-15 22:48:41.604951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-15 22:48:41.605139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-15 22:48:41.605167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-15 22:48:41.605359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-15 22:48:41.605384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 
00:24:58.189 [2024-07-15 22:48:41.605536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-15 22:48:41.605561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-15 22:48:41.605764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-15 22:48:41.605792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-15 22:48:41.606019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-15 22:48:41.606045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-15 22:48:41.606221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-15 22:48:41.606249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-15 22:48:41.606439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-15 22:48:41.606467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 
00:24:58.192 [2024-07-15 22:48:41.629946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.629975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-15 22:48:41.630169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.630195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-15 22:48:41.630377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.630402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-15 22:48:41.630568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.630594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-15 22:48:41.630764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.630789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 
00:24:58.192 [2024-07-15 22:48:41.630983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.631012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-15 22:48:41.631196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.631223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-15 22:48:41.631444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.631469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-15 22:48:41.631703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.631728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-15 22:48:41.631931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.631957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 
00:24:58.192 [2024-07-15 22:48:41.632140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.632165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-15 22:48:41.632343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.632367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-15 22:48:41.632586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.632614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-15 22:48:41.632806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.632832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-15 22:48:41.633016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.633044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 
00:24:58.192 [2024-07-15 22:48:41.633263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.633291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-15 22:48:41.633487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.633512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-15 22:48:41.633678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.633707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-15 22:48:41.633900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.633939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-15 22:48:41.634136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.634162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 
00:24:58.192 [2024-07-15 22:48:41.634359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.634387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-15 22:48:41.634550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.634578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-15 22:48:41.634771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.634797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-15 22:48:41.634975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.635001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-15 22:48:41.635198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.635223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 
00:24:58.192 [2024-07-15 22:48:41.635440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.635466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-15 22:48:41.635672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.635700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-15 22:48:41.635890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.635919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-15 22:48:41.636138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.636166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-15 22:48:41.636333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.636361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 
00:24:58.192 [2024-07-15 22:48:41.636548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.636576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-15 22:48:41.636778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.636803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-15 22:48:41.636980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.637009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-15 22:48:41.637199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.637227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-15 22:48:41.637420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-15 22:48:41.637445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 
00:24:58.193 [2024-07-15 22:48:41.637642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.193 [2024-07-15 22:48:41.637670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.193 qpair failed and we were unable to recover it. 00:24:58.475 [2024-07-15 22:48:41.637862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.475 [2024-07-15 22:48:41.637899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.475 qpair failed and we were unable to recover it. 00:24:58.475 [2024-07-15 22:48:41.638066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.475 [2024-07-15 22:48:41.638092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.475 qpair failed and we were unable to recover it. 00:24:58.475 [2024-07-15 22:48:41.638313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.475 [2024-07-15 22:48:41.638341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.475 qpair failed and we were unable to recover it. 00:24:58.475 [2024-07-15 22:48:41.638545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.475 [2024-07-15 22:48:41.638570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.475 qpair failed and we were unable to recover it. 
00:24:58.475 [2024-07-15 22:48:41.638740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.475 [2024-07-15 22:48:41.638765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.475 qpair failed and we were unable to recover it. 00:24:58.475 [2024-07-15 22:48:41.638955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.475 [2024-07-15 22:48:41.638984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.475 qpair failed and we were unable to recover it. 00:24:58.475 [2024-07-15 22:48:41.639157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.475 [2024-07-15 22:48:41.639184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.475 qpair failed and we were unable to recover it. 00:24:58.475 [2024-07-15 22:48:41.639376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.475 [2024-07-15 22:48:41.639401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.475 qpair failed and we were unable to recover it. 00:24:58.475 [2024-07-15 22:48:41.639557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.475 [2024-07-15 22:48:41.639583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.475 qpair failed and we were unable to recover it. 
00:24:58.475 [2024-07-15 22:48:41.639756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.475 [2024-07-15 22:48:41.639781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.475 qpair failed and we were unable to recover it. 00:24:58.475 [2024-07-15 22:48:41.639956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.475 [2024-07-15 22:48:41.639982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.475 qpair failed and we were unable to recover it. 00:24:58.475 [2024-07-15 22:48:41.640173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.475 [2024-07-15 22:48:41.640202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.475 qpair failed and we were unable to recover it. 00:24:58.475 [2024-07-15 22:48:41.640405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.476 [2024-07-15 22:48:41.640434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.476 qpair failed and we were unable to recover it. 00:24:58.476 [2024-07-15 22:48:41.640625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.476 [2024-07-15 22:48:41.640651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.476 qpair failed and we were unable to recover it. 
00:24:58.476 [2024-07-15 22:48:41.640841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.476 [2024-07-15 22:48:41.640869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.476 qpair failed and we were unable to recover it. 00:24:58.476 [2024-07-15 22:48:41.641100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.476 [2024-07-15 22:48:41.641128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.476 qpair failed and we were unable to recover it. 00:24:58.476 [2024-07-15 22:48:41.641295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.476 [2024-07-15 22:48:41.641321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.476 qpair failed and we were unable to recover it. 00:24:58.476 [2024-07-15 22:48:41.641511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.476 [2024-07-15 22:48:41.641539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.476 qpair failed and we were unable to recover it. 00:24:58.476 [2024-07-15 22:48:41.641710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.476 [2024-07-15 22:48:41.641738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.476 qpair failed and we were unable to recover it. 
00:24:58.476 [2024-07-15 22:48:41.641911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.476 [2024-07-15 22:48:41.641937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.476 qpair failed and we were unable to recover it. 00:24:58.476 [2024-07-15 22:48:41.642100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.476 [2024-07-15 22:48:41.642128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.476 qpair failed and we were unable to recover it. 00:24:58.476 [2024-07-15 22:48:41.642289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.476 [2024-07-15 22:48:41.642317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.476 qpair failed and we were unable to recover it. 00:24:58.476 [2024-07-15 22:48:41.642510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.476 [2024-07-15 22:48:41.642535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.476 qpair failed and we were unable to recover it. 00:24:58.476 [2024-07-15 22:48:41.642693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.476 [2024-07-15 22:48:41.642721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.476 qpair failed and we were unable to recover it. 
00:24:58.476 [2024-07-15 22:48:41.642940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.476 [2024-07-15 22:48:41.642969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.476 qpair failed and we were unable to recover it. 00:24:58.476 [2024-07-15 22:48:41.643157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.476 [2024-07-15 22:48:41.643183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.476 qpair failed and we were unable to recover it. 00:24:58.476 [2024-07-15 22:48:41.643376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.476 [2024-07-15 22:48:41.643404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.476 qpair failed and we were unable to recover it. 00:24:58.476 [2024-07-15 22:48:41.643558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.476 [2024-07-15 22:48:41.643586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.476 qpair failed and we were unable to recover it. 00:24:58.476 [2024-07-15 22:48:41.643753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.476 [2024-07-15 22:48:41.643778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.476 qpair failed and we were unable to recover it. 
00:24:58.476 [2024-07-15 22:48:41.643997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.476 [2024-07-15 22:48:41.644025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.476 qpair failed and we were unable to recover it. 00:24:58.476 [2024-07-15 22:48:41.644189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.476 [2024-07-15 22:48:41.644218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.476 qpair failed and we were unable to recover it. 00:24:58.476 [2024-07-15 22:48:41.644410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.476 [2024-07-15 22:48:41.644435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.476 qpair failed and we were unable to recover it. 00:24:58.476 [2024-07-15 22:48:41.644626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.476 [2024-07-15 22:48:41.644654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.476 qpair failed and we were unable to recover it. 00:24:58.476 [2024-07-15 22:48:41.644807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.476 [2024-07-15 22:48:41.644839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.476 qpair failed and we were unable to recover it. 
00:24:58.476 [2024-07-15 22:48:41.645040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.476 [2024-07-15 22:48:41.645065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.476 qpair failed and we were unable to recover it. 00:24:58.476 [2024-07-15 22:48:41.645262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.476 [2024-07-15 22:48:41.645290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.476 qpair failed and we were unable to recover it. 00:24:58.476 [2024-07-15 22:48:41.645474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.476 [2024-07-15 22:48:41.645503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.476 qpair failed and we were unable to recover it. 00:24:58.476 [2024-07-15 22:48:41.645671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.476 [2024-07-15 22:48:41.645696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.476 qpair failed and we were unable to recover it. 00:24:58.476 [2024-07-15 22:48:41.645835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.476 [2024-07-15 22:48:41.645860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.476 qpair failed and we were unable to recover it. 
00:24:58.476 [2024-07-15 22:48:41.646044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.476 [2024-07-15 22:48:41.646072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.476 qpair failed and we were unable to recover it. 00:24:58.476 [2024-07-15 22:48:41.646268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.476 [2024-07-15 22:48:41.646293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.476 qpair failed and we were unable to recover it. 00:24:58.476 [2024-07-15 22:48:41.646480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.476 [2024-07-15 22:48:41.646508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.476 qpair failed and we were unable to recover it. 00:24:58.476 [2024-07-15 22:48:41.646707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.476 [2024-07-15 22:48:41.646732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.476 qpair failed and we were unable to recover it. 00:24:58.476 [2024-07-15 22:48:41.646933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.476 [2024-07-15 22:48:41.646960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.476 qpair failed and we were unable to recover it. 
00:24:58.479 [2024-07-15 22:48:41.670897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.479 [2024-07-15 22:48:41.670923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.479 qpair failed and we were unable to recover it. 00:24:58.479 [2024-07-15 22:48:41.671110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.479 [2024-07-15 22:48:41.671138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.479 qpair failed and we were unable to recover it. 00:24:58.479 [2024-07-15 22:48:41.671338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.479 [2024-07-15 22:48:41.671366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.479 qpair failed and we were unable to recover it. 00:24:58.479 [2024-07-15 22:48:41.671560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.479 [2024-07-15 22:48:41.671586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.479 qpair failed and we were unable to recover it. 00:24:58.479 [2024-07-15 22:48:41.671783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.479 [2024-07-15 22:48:41.671812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.479 qpair failed and we were unable to recover it. 
00:24:58.479 [2024-07-15 22:48:41.671985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.479 [2024-07-15 22:48:41.672014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.479 qpair failed and we were unable to recover it. 00:24:58.479 [2024-07-15 22:48:41.672187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.479 [2024-07-15 22:48:41.672212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.479 qpair failed and we were unable to recover it. 00:24:58.479 [2024-07-15 22:48:41.672373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.479 [2024-07-15 22:48:41.672401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.479 qpair failed and we were unable to recover it. 00:24:58.479 [2024-07-15 22:48:41.672627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.479 [2024-07-15 22:48:41.672655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.479 qpair failed and we were unable to recover it. 00:24:58.479 [2024-07-15 22:48:41.672816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.479 [2024-07-15 22:48:41.672840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.479 qpair failed and we were unable to recover it. 
00:24:58.479 [2024-07-15 22:48:41.673043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.479 [2024-07-15 22:48:41.673072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.479 qpair failed and we were unable to recover it. 00:24:58.479 [2024-07-15 22:48:41.673229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.479 [2024-07-15 22:48:41.673257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.479 qpair failed and we were unable to recover it. 00:24:58.479 [2024-07-15 22:48:41.673450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.479 [2024-07-15 22:48:41.673476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.479 qpair failed and we were unable to recover it. 00:24:58.479 [2024-07-15 22:48:41.673674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.479 [2024-07-15 22:48:41.673702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.479 qpair failed and we were unable to recover it. 00:24:58.479 [2024-07-15 22:48:41.673904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.479 [2024-07-15 22:48:41.673932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.479 qpair failed and we were unable to recover it. 
00:24:58.479 [2024-07-15 22:48:41.674101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.479 [2024-07-15 22:48:41.674131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.479 qpair failed and we were unable to recover it. 00:24:58.479 [2024-07-15 22:48:41.674298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.479 [2024-07-15 22:48:41.674323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.479 qpair failed and we were unable to recover it. 00:24:58.479 [2024-07-15 22:48:41.674547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.479 [2024-07-15 22:48:41.674575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 00:24:58.480 [2024-07-15 22:48:41.674783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.674808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 00:24:58.480 [2024-07-15 22:48:41.675012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.675041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 
00:24:58.480 [2024-07-15 22:48:41.675245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.675270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 00:24:58.480 [2024-07-15 22:48:41.675442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.675467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 00:24:58.480 [2024-07-15 22:48:41.675696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.675724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 00:24:58.480 [2024-07-15 22:48:41.675931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.675961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 00:24:58.480 [2024-07-15 22:48:41.676164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.676189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 
00:24:58.480 [2024-07-15 22:48:41.676392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.676420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 00:24:58.480 [2024-07-15 22:48:41.676590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.676618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 00:24:58.480 [2024-07-15 22:48:41.676795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.676821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 00:24:58.480 [2024-07-15 22:48:41.677049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.677077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 00:24:58.480 [2024-07-15 22:48:41.677246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.677274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 
00:24:58.480 [2024-07-15 22:48:41.677445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.677471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 00:24:58.480 [2024-07-15 22:48:41.677688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.677716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 00:24:58.480 [2024-07-15 22:48:41.677884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.677913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 00:24:58.480 [2024-07-15 22:48:41.678111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.678136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 00:24:58.480 [2024-07-15 22:48:41.678305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.678333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 
00:24:58.480 [2024-07-15 22:48:41.678557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.678582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 00:24:58.480 [2024-07-15 22:48:41.678740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.678765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 00:24:58.480 [2024-07-15 22:48:41.678943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.678969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 00:24:58.480 [2024-07-15 22:48:41.679182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.679207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 00:24:58.480 [2024-07-15 22:48:41.679418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.679443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 
00:24:58.480 [2024-07-15 22:48:41.679643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.679671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 00:24:58.480 [2024-07-15 22:48:41.679899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.679927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 00:24:58.480 [2024-07-15 22:48:41.680128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.680153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 00:24:58.480 [2024-07-15 22:48:41.680351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.680380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 00:24:58.480 [2024-07-15 22:48:41.680578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.680603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 
00:24:58.480 [2024-07-15 22:48:41.680799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.680824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 00:24:58.480 [2024-07-15 22:48:41.681002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.681032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 00:24:58.480 [2024-07-15 22:48:41.681225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.681254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 00:24:58.480 [2024-07-15 22:48:41.681427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.681452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 00:24:58.480 [2024-07-15 22:48:41.681617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.681646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 
00:24:58.480 [2024-07-15 22:48:41.681805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.681833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 00:24:58.480 [2024-07-15 22:48:41.682043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.682069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 00:24:58.480 [2024-07-15 22:48:41.682297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.682326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 00:24:58.480 [2024-07-15 22:48:41.682516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.682544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 00:24:58.480 [2024-07-15 22:48:41.682739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.682764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.480 qpair failed and we were unable to recover it. 
00:24:58.480 [2024-07-15 22:48:41.682960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.480 [2024-07-15 22:48:41.682990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.481 qpair failed and we were unable to recover it. 00:24:58.481 [2024-07-15 22:48:41.683151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.481 [2024-07-15 22:48:41.683179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.481 qpair failed and we were unable to recover it. 00:24:58.481 [2024-07-15 22:48:41.683351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.481 [2024-07-15 22:48:41.683378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.481 qpair failed and we were unable to recover it. 00:24:58.481 [2024-07-15 22:48:41.683575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.481 [2024-07-15 22:48:41.683604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.481 qpair failed and we were unable to recover it. 00:24:58.481 [2024-07-15 22:48:41.683832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.481 [2024-07-15 22:48:41.683860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.481 qpair failed and we were unable to recover it. 
00:24:58.481 [2024-07-15 22:48:41.684071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.481 [2024-07-15 22:48:41.684097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.481 qpair failed and we were unable to recover it. 00:24:58.481 [2024-07-15 22:48:41.684326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.481 [2024-07-15 22:48:41.684354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.481 qpair failed and we were unable to recover it. 00:24:58.481 [2024-07-15 22:48:41.684550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.481 [2024-07-15 22:48:41.684578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.481 qpair failed and we were unable to recover it. 00:24:58.481 [2024-07-15 22:48:41.684752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.481 [2024-07-15 22:48:41.684778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.481 qpair failed and we were unable to recover it. 00:24:58.481 [2024-07-15 22:48:41.684934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.481 [2024-07-15 22:48:41.684961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.481 qpair failed and we were unable to recover it. 
00:24:58.481 [2024-07-15 22:48:41.685158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.481 [2024-07-15 22:48:41.685186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.481 qpair failed and we were unable to recover it. 00:24:58.481 [2024-07-15 22:48:41.685405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.481 [2024-07-15 22:48:41.685430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.481 qpair failed and we were unable to recover it. 00:24:58.481 [2024-07-15 22:48:41.685643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.481 [2024-07-15 22:48:41.685672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.481 qpair failed and we were unable to recover it. 00:24:58.481 [2024-07-15 22:48:41.685867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.481 [2024-07-15 22:48:41.685902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.481 qpair failed and we were unable to recover it. 00:24:58.481 [2024-07-15 22:48:41.686101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.481 [2024-07-15 22:48:41.686126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.481 qpair failed and we were unable to recover it. 
00:24:58.481 [2024-07-15 22:48:41.686307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.481 [2024-07-15 22:48:41.686335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.481 qpair failed and we were unable to recover it. 00:24:58.481 [2024-07-15 22:48:41.686538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.481 [2024-07-15 22:48:41.686567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.481 qpair failed and we were unable to recover it. 00:24:58.481 [2024-07-15 22:48:41.686758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.481 [2024-07-15 22:48:41.686784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.481 qpair failed and we were unable to recover it. 00:24:58.481 [2024-07-15 22:48:41.686959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.481 [2024-07-15 22:48:41.686985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.481 qpair failed and we were unable to recover it. 00:24:58.481 [2024-07-15 22:48:41.687158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.481 [2024-07-15 22:48:41.687183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.481 qpair failed and we were unable to recover it. 
00:24:58.481 [2024-07-15 22:48:41.687358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.481 [2024-07-15 22:48:41.687383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.481 qpair failed and we were unable to recover it.
[The identical error pair (posix.c:1038 posix_sock_create connect() failed, errno = 111, followed by nvme_tcp.c:2383 nvme_tcp_qpair_connect_sock sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it) repeats continuously for every reconnect attempt from 22:48:41.687545 through 22:48:41.712241; duplicate entries elided.]
00:24:58.485 [2024-07-15 22:48:41.712432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.485 [2024-07-15 22:48:41.712460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.485 qpair failed and we were unable to recover it. 00:24:58.485 [2024-07-15 22:48:41.712619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.485 [2024-07-15 22:48:41.712647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.485 qpair failed and we were unable to recover it. 00:24:58.485 [2024-07-15 22:48:41.712837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.485 [2024-07-15 22:48:41.712866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.485 qpair failed and we were unable to recover it. 00:24:58.485 [2024-07-15 22:48:41.713053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.485 [2024-07-15 22:48:41.713082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.485 qpair failed and we were unable to recover it. 00:24:58.485 [2024-07-15 22:48:41.713248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.485 [2024-07-15 22:48:41.713276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.485 qpair failed and we were unable to recover it. 
00:24:58.485 [2024-07-15 22:48:41.713445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.485 [2024-07-15 22:48:41.713470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.485 qpair failed and we were unable to recover it. 00:24:58.485 [2024-07-15 22:48:41.713664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.485 [2024-07-15 22:48:41.713694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.485 qpair failed and we were unable to recover it. 00:24:58.485 [2024-07-15 22:48:41.713858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.485 [2024-07-15 22:48:41.713908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.485 qpair failed and we were unable to recover it. 00:24:58.485 [2024-07-15 22:48:41.714081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.485 [2024-07-15 22:48:41.714106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.485 qpair failed and we were unable to recover it. 00:24:58.485 [2024-07-15 22:48:41.714297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.485 [2024-07-15 22:48:41.714324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.485 qpair failed and we were unable to recover it. 
00:24:58.485 [2024-07-15 22:48:41.714488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.485 [2024-07-15 22:48:41.714517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.485 qpair failed and we were unable to recover it. 00:24:58.485 [2024-07-15 22:48:41.714710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.485 [2024-07-15 22:48:41.714735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.485 qpair failed and we were unable to recover it. 00:24:58.485 [2024-07-15 22:48:41.714935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.485 [2024-07-15 22:48:41.714964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.485 qpair failed and we were unable to recover it. 00:24:58.485 [2024-07-15 22:48:41.715229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.485 [2024-07-15 22:48:41.715257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.485 qpair failed and we were unable to recover it. 00:24:58.485 [2024-07-15 22:48:41.715455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.485 [2024-07-15 22:48:41.715480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.485 qpair failed and we were unable to recover it. 
00:24:58.485 [2024-07-15 22:48:41.715672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.485 [2024-07-15 22:48:41.715700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.485 qpair failed and we were unable to recover it. 00:24:58.485 [2024-07-15 22:48:41.715882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.485 [2024-07-15 22:48:41.715911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.485 qpair failed and we were unable to recover it. 00:24:58.485 [2024-07-15 22:48:41.716126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.485 [2024-07-15 22:48:41.716152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.485 qpair failed and we were unable to recover it. 00:24:58.485 [2024-07-15 22:48:41.716348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.485 [2024-07-15 22:48:41.716378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.485 qpair failed and we were unable to recover it. 00:24:58.485 [2024-07-15 22:48:41.716598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.485 [2024-07-15 22:48:41.716626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.485 qpair failed and we were unable to recover it. 
00:24:58.485 [2024-07-15 22:48:41.716832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.485 [2024-07-15 22:48:41.716857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.485 qpair failed and we were unable to recover it. 00:24:58.485 [2024-07-15 22:48:41.717062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.485 [2024-07-15 22:48:41.717090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.485 qpair failed and we were unable to recover it. 00:24:58.485 [2024-07-15 22:48:41.717275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.485 [2024-07-15 22:48:41.717304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.485 qpair failed and we were unable to recover it. 00:24:58.485 [2024-07-15 22:48:41.717506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.485 [2024-07-15 22:48:41.717532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.485 qpair failed and we were unable to recover it. 00:24:58.485 [2024-07-15 22:48:41.717830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.717890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 
00:24:58.486 [2024-07-15 22:48:41.718098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.718123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 00:24:58.486 [2024-07-15 22:48:41.718298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.718323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 00:24:58.486 [2024-07-15 22:48:41.718504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.718529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 00:24:58.486 [2024-07-15 22:48:41.718699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.718724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 00:24:58.486 [2024-07-15 22:48:41.718900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.718931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 
00:24:58.486 [2024-07-15 22:48:41.719110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.719139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 00:24:58.486 [2024-07-15 22:48:41.719331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.719359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 00:24:58.486 [2024-07-15 22:48:41.719556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.719581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 00:24:58.486 [2024-07-15 22:48:41.719750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.719778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 00:24:58.486 [2024-07-15 22:48:41.719973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.719999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 
00:24:58.486 [2024-07-15 22:48:41.720165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.720190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 00:24:58.486 [2024-07-15 22:48:41.720340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.720365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 00:24:58.486 [2024-07-15 22:48:41.720506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.720531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 00:24:58.486 [2024-07-15 22:48:41.720709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.720734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 00:24:58.486 [2024-07-15 22:48:41.720909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.720938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 
00:24:58.486 [2024-07-15 22:48:41.721127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.721155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 00:24:58.486 [2024-07-15 22:48:41.721355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.721382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 00:24:58.486 [2024-07-15 22:48:41.721539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.721567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 00:24:58.486 [2024-07-15 22:48:41.721738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.721767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 00:24:58.486 [2024-07-15 22:48:41.721963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.721989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 
00:24:58.486 [2024-07-15 22:48:41.722182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.722211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 00:24:58.486 [2024-07-15 22:48:41.722365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.722393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 00:24:58.486 [2024-07-15 22:48:41.722591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.722616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 00:24:58.486 [2024-07-15 22:48:41.722767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.722793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 00:24:58.486 [2024-07-15 22:48:41.723012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.723041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 
00:24:58.486 [2024-07-15 22:48:41.723300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.723325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 00:24:58.486 [2024-07-15 22:48:41.723516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.723544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 00:24:58.486 [2024-07-15 22:48:41.723708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.723736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 00:24:58.486 [2024-07-15 22:48:41.723928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.723954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 00:24:58.486 [2024-07-15 22:48:41.724146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.724174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 
00:24:58.486 [2024-07-15 22:48:41.724388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.724416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 00:24:58.486 [2024-07-15 22:48:41.724597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.724622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 00:24:58.486 [2024-07-15 22:48:41.724825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.724853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 00:24:58.486 [2024-07-15 22:48:41.725032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.725058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 00:24:58.486 [2024-07-15 22:48:41.725207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.725232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 
00:24:58.486 [2024-07-15 22:48:41.725391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.725416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 00:24:58.486 [2024-07-15 22:48:41.725613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.725641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 00:24:58.486 [2024-07-15 22:48:41.725863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.725894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 00:24:58.486 [2024-07-15 22:48:41.726070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.726098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 00:24:58.486 [2024-07-15 22:48:41.726262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.726290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 
00:24:58.486 [2024-07-15 22:48:41.726490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-15 22:48:41.726515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-15 22:48:41.726736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-15 22:48:41.726763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-15 22:48:41.726954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-15 22:48:41.726984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-15 22:48:41.727174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-15 22:48:41.727199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-15 22:48:41.727405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-15 22:48:41.727433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 
00:24:58.487 [2024-07-15 22:48:41.727585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-15 22:48:41.727618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-15 22:48:41.727816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-15 22:48:41.727841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-15 22:48:41.728050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-15 22:48:41.728079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-15 22:48:41.728262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-15 22:48:41.728290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-15 22:48:41.728473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-15 22:48:41.728498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 
00:24:58.487 [2024-07-15 22:48:41.728687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-15 22:48:41.728715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-15 22:48:41.728906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-15 22:48:41.728935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-15 22:48:41.729102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-15 22:48:41.729127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-15 22:48:41.729273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-15 22:48:41.729299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-15 22:48:41.729490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-15 22:48:41.729518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 
00:24:58.487 [... the same three-record failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every subsequent connection attempt through [2024-07-15 22:48:41.753731] (log time 00:24:58.490) ...]
00:24:58.490 [2024-07-15 22:48:41.753917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-15 22:48:41.753946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-15 22:48:41.754201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-15 22:48:41.754227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-15 22:48:41.754458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-15 22:48:41.754486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-15 22:48:41.754712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-15 22:48:41.754738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-15 22:48:41.754907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-15 22:48:41.754933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 
00:24:58.490 [2024-07-15 22:48:41.755078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-15 22:48:41.755104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-15 22:48:41.755324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-15 22:48:41.755352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-15 22:48:41.755572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-15 22:48:41.755597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-15 22:48:41.755764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-15 22:48:41.755796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-15 22:48:41.755966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-15 22:48:41.755995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 
00:24:58.490 [2024-07-15 22:48:41.756193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-15 22:48:41.756219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-15 22:48:41.756418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-15 22:48:41.756446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-15 22:48:41.756633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-15 22:48:41.756661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-15 22:48:41.756831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-15 22:48:41.756857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-15 22:48:41.757055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-15 22:48:41.757083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 
00:24:58.490 [2024-07-15 22:48:41.757280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-15 22:48:41.757308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-15 22:48:41.757503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-15 22:48:41.757533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-15 22:48:41.757749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-15 22:48:41.757778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-15 22:48:41.758045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-15 22:48:41.758074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-15 22:48:41.758236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-15 22:48:41.758261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 
00:24:58.490 [2024-07-15 22:48:41.758482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-15 22:48:41.758510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-15 22:48:41.758696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-15 22:48:41.758724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-15 22:48:41.758924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-15 22:48:41.758949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-15 22:48:41.759124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-15 22:48:41.759149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-15 22:48:41.759371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-15 22:48:41.759400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 
00:24:58.490 [2024-07-15 22:48:41.759607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-15 22:48:41.759632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-15 22:48:41.759852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-15 22:48:41.759897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-15 22:48:41.760059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.760089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-15 22:48:41.760282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.760309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-15 22:48:41.760467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.760496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 
00:24:58.491 [2024-07-15 22:48:41.760693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.760722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-15 22:48:41.760976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.761002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-15 22:48:41.761147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.761201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-15 22:48:41.761364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.761391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-15 22:48:41.761561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.761587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 
00:24:58.491 [2024-07-15 22:48:41.761771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.761800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-15 22:48:41.761987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.762014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-15 22:48:41.762191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.762217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-15 22:48:41.762388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.762416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-15 22:48:41.762572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.762600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 
00:24:58.491 [2024-07-15 22:48:41.762791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.762816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-15 22:48:41.762999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.763029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-15 22:48:41.763253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.763281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-15 22:48:41.763471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.763496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-15 22:48:41.763665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.763697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 
00:24:58.491 [2024-07-15 22:48:41.763924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.763950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-15 22:48:41.764101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.764127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-15 22:48:41.764306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.764335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-15 22:48:41.764495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.764523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-15 22:48:41.764746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.764772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 
00:24:58.491 [2024-07-15 22:48:41.764965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.764995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-15 22:48:41.765154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.765191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-15 22:48:41.765389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.765414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-15 22:48:41.765609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.765638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-15 22:48:41.765864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.765901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 
00:24:58.491 [2024-07-15 22:48:41.766080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.766106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-15 22:48:41.766305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.766334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-15 22:48:41.766532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.766560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-15 22:48:41.766732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.766758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-15 22:48:41.766984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.767014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 
00:24:58.491 [2024-07-15 22:48:41.767175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.767203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-15 22:48:41.767458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.767484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-15 22:48:41.767653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.767681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-15 22:48:41.767897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.767927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-15 22:48:41.768095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.768121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 
00:24:58.491 [2024-07-15 22:48:41.768291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.768319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-15 22:48:41.768514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.768542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-15 22:48:41.768720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-15 22:48:41.768745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-15 22:48:41.768938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-15 22:48:41.768976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 00:24:58.492 [2024-07-15 22:48:41.769135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-15 22:48:41.769165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 
00:24:58.492 [2024-07-15 22:48:41.769373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-15 22:48:41.769398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 00:24:58.492 [2024-07-15 22:48:41.769581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-15 22:48:41.769609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 00:24:58.492 [2024-07-15 22:48:41.769842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-15 22:48:41.769867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 00:24:58.492 [2024-07-15 22:48:41.770053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-15 22:48:41.770078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 00:24:58.492 [2024-07-15 22:48:41.770267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-15 22:48:41.770293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 
00:24:58.492 [2024-07-15 22:48:41.770498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.492 [2024-07-15 22:48:41.770524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.492 qpair failed and we were unable to recover it.
00:24:58.492 [2024-07-15 22:48:41.770690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.492 [2024-07-15 22:48:41.770715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.492 qpair failed and we were unable to recover it.
00:24:58.492 [2024-07-15 22:48:41.770948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.492 [2024-07-15 22:48:41.770978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.492 qpair failed and we were unable to recover it.
00:24:58.492 [2024-07-15 22:48:41.771181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.492 [2024-07-15 22:48:41.771206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.492 qpair failed and we were unable to recover it.
00:24:58.492 [2024-07-15 22:48:41.771372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.492 [2024-07-15 22:48:41.771397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.492 qpair failed and we were unable to recover it.
00:24:58.492 [2024-07-15 22:48:41.771549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.492 [2024-07-15 22:48:41.771575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.492 qpair failed and we were unable to recover it.
00:24:58.492 [2024-07-15 22:48:41.771767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.492 [2024-07-15 22:48:41.771795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.492 qpair failed and we were unable to recover it.
00:24:58.492 [2024-07-15 22:48:41.771998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.492 [2024-07-15 22:48:41.772025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.492 qpair failed and we were unable to recover it.
00:24:58.492 [2024-07-15 22:48:41.772293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.492 [2024-07-15 22:48:41.772321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.492 qpair failed and we were unable to recover it.
00:24:58.492 [2024-07-15 22:48:41.772532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.492 [2024-07-15 22:48:41.772564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.492 qpair failed and we were unable to recover it.
00:24:58.492 [2024-07-15 22:48:41.772724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.492 [2024-07-15 22:48:41.772749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.492 qpair failed and we were unable to recover it.
00:24:58.492 [2024-07-15 22:48:41.772898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.492 [2024-07-15 22:48:41.772942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.492 qpair failed and we were unable to recover it.
00:24:58.492 [2024-07-15 22:48:41.773096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.492 [2024-07-15 22:48:41.773124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.492 qpair failed and we were unable to recover it.
00:24:58.492 [2024-07-15 22:48:41.773302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.492 [2024-07-15 22:48:41.773327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.492 qpair failed and we were unable to recover it.
00:24:58.492 [2024-07-15 22:48:41.773523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.492 [2024-07-15 22:48:41.773548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.492 qpair failed and we were unable to recover it.
00:24:58.492 [2024-07-15 22:48:41.773759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.492 [2024-07-15 22:48:41.773787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.492 qpair failed and we were unable to recover it.
00:24:58.492 [2024-07-15 22:48:41.773997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.492 [2024-07-15 22:48:41.774023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.492 qpair failed and we were unable to recover it.
00:24:58.492 [2024-07-15 22:48:41.774223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.492 [2024-07-15 22:48:41.774252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.492 qpair failed and we were unable to recover it.
00:24:58.492 [2024-07-15 22:48:41.774413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.492 [2024-07-15 22:48:41.774441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.492 qpair failed and we were unable to recover it.
00:24:58.492 [2024-07-15 22:48:41.774648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.492 [2024-07-15 22:48:41.774674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.492 qpair failed and we were unable to recover it.
00:24:58.492 [2024-07-15 22:48:41.774872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.492 [2024-07-15 22:48:41.774907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.492 qpair failed and we were unable to recover it.
00:24:58.492 [2024-07-15 22:48:41.775100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.492 [2024-07-15 22:48:41.775128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.492 qpair failed and we were unable to recover it.
00:24:58.492 [2024-07-15 22:48:41.775286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.492 [2024-07-15 22:48:41.775312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.492 qpair failed and we were unable to recover it.
00:24:58.492 [2024-07-15 22:48:41.775505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.492 [2024-07-15 22:48:41.775533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.492 qpair failed and we were unable to recover it.
00:24:58.492 [2024-07-15 22:48:41.775727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.492 [2024-07-15 22:48:41.775755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.492 qpair failed and we were unable to recover it.
00:24:58.492 [2024-07-15 22:48:41.775978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.492 [2024-07-15 22:48:41.776005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.492 qpair failed and we were unable to recover it.
00:24:58.492 [2024-07-15 22:48:41.776168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.492 [2024-07-15 22:48:41.776204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.492 qpair failed and we were unable to recover it.
00:24:58.492 [2024-07-15 22:48:41.776373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.492 [2024-07-15 22:48:41.776402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.492 qpair failed and we were unable to recover it.
00:24:58.492 [2024-07-15 22:48:41.776622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.776648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.776843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.776872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.777101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.777129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.777360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.777386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.777602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.777628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.777825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.777850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.778030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.778056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.778252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.778278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.778448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.778477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.778737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.778762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.778970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.778999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.779182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.779210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.779370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.779396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.779559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.779587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.779775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.779804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.780024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.780050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.780243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.780272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.780434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.780462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.780661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.780687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.780887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.780916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.781081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.781109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.781289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.781314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.781495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.781528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.781856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.781915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.782119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.782145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.782343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.782372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.782590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.782619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.782794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.782819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.783026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.783055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.783274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.783299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.783504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.783530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.783732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.783760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.783923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.783953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.784145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.784172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.784328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.784354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.784541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.784569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.784772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.784798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.784948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.784974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.785138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.785178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.785380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.785405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.785629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.785658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.493 qpair failed and we were unable to recover it.
00:24:58.493 [2024-07-15 22:48:41.785846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.493 [2024-07-15 22:48:41.785874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.786046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.786072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.786346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.786374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.786567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.786595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.786787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.786813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.787025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.787054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.787215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.787243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.787432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.787458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.787603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.787635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.787788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.787831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.788070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.788097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.788308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.788336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.788560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.788588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.788786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.788811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.788996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.789025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.789207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.789235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.789428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.789454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.789711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.789739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.789933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.789962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.790157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.790186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.790386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.790414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.790572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.790600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.790796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.790822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.791030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.791059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.791252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.791280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.791507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.791531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.791732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.791759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.791979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.792007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.792206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.792230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.792413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.792437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.792646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.792673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.792862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.792898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.793098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.793125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.793348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.793375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.793569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.793595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.793832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.793860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.794087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.794113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.794295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.794321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.794495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.794523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.794726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.794752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.794951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.794977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.795151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.494 [2024-07-15 22:48:41.795179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.494 qpair failed and we were unable to recover it.
00:24:58.494 [2024-07-15 22:48:41.795371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.495 [2024-07-15 22:48:41.795399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.495 qpair failed and we were unable to recover it.
00:24:58.495 [2024-07-15 22:48:41.795593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.495 [2024-07-15 22:48:41.795618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.495 qpair failed and we were unable to recover it.
00:24:58.495 [2024-07-15 22:48:41.795810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.495 [2024-07-15 22:48:41.795837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.495 qpair failed and we were unable to recover it.
00:24:58.495 [2024-07-15 22:48:41.796042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.495 [2024-07-15 22:48:41.796070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.495 qpair failed and we were unable to recover it.
00:24:58.495 [2024-07-15 22:48:41.796260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.796285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-15 22:48:41.796459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.796484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-15 22:48:41.796716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.796744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-15 22:48:41.797001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.797031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-15 22:48:41.797230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.797257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 
00:24:58.495 [2024-07-15 22:48:41.797407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.797435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-15 22:48:41.797633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.797658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-15 22:48:41.797808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.797834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-15 22:48:41.798037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.798066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-15 22:48:41.798265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.798290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 
00:24:58.495 [2024-07-15 22:48:41.798478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.798506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-15 22:48:41.798671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.798699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-15 22:48:41.798893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.798919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-15 22:48:41.799108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.799136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-15 22:48:41.799291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.799319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 
00:24:58.495 [2024-07-15 22:48:41.799537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.799565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-15 22:48:41.799757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.799786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-15 22:48:41.799998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.800023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-15 22:48:41.800189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.800214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-15 22:48:41.800387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.800415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 
00:24:58.495 [2024-07-15 22:48:41.800631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.800659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-15 22:48:41.800823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.800848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-15 22:48:41.801038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.801067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-15 22:48:41.801255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.801283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-15 22:48:41.801472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.801497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 
00:24:58.495 [2024-07-15 22:48:41.801757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.801785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-15 22:48:41.802009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.802035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-15 22:48:41.802206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.802231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-15 22:48:41.802400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.802428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-15 22:48:41.802631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.802656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 
00:24:58.495 [2024-07-15 22:48:41.802831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.802860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-15 22:48:41.803069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.803097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-15 22:48:41.803288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.803316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-15 22:48:41.803488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.803514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-15 22:48:41.803747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.803796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 
00:24:58.495 [2024-07-15 22:48:41.803993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.804022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-15 22:48:41.804224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.804250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-15 22:48:41.804423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-15 22:48:41.804448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-15 22:48:41.804679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.804707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-15 22:48:41.804905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.804931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 
00:24:58.496 [2024-07-15 22:48:41.805110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.805139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-15 22:48:41.805326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.805351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-15 22:48:41.805538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.805563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-15 22:48:41.805734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.805761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-15 22:48:41.805993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.806023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 
00:24:58.496 [2024-07-15 22:48:41.806214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.806240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-15 22:48:41.806432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.806460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-15 22:48:41.806652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.806680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-15 22:48:41.806866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.806897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-15 22:48:41.807090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.807118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 
00:24:58.496 [2024-07-15 22:48:41.807325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.807353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-15 22:48:41.807535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.807560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-15 22:48:41.807790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.807818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-15 22:48:41.807997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.808026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-15 22:48:41.808256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.808281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 
00:24:58.496 [2024-07-15 22:48:41.808476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.808505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-15 22:48:41.808770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.808798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-15 22:48:41.808998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.809024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-15 22:48:41.809215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.809241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-15 22:48:41.809411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.809439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 
00:24:58.496 [2024-07-15 22:48:41.809642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.809667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-15 22:48:41.809898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.809927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-15 22:48:41.810167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.810192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-15 22:48:41.810395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.810420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-15 22:48:41.810593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.810621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 
00:24:58.496 [2024-07-15 22:48:41.810812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.810840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-15 22:48:41.811037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.811063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-15 22:48:41.811237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.811265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-15 22:48:41.811450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.811478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-15 22:48:41.811736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.811762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 
00:24:58.496 [2024-07-15 22:48:41.811959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.811988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-15 22:48:41.812181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.812214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-15 22:48:41.812452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-15 22:48:41.812477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-15 22:48:41.812744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-15 22:48:41.812773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-15 22:48:41.812983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-15 22:48:41.813013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 
00:24:58.497 [2024-07-15 22:48:41.813196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-15 22:48:41.813221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-15 22:48:41.813415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-15 22:48:41.813443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-15 22:48:41.813635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-15 22:48:41.813664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-15 22:48:41.813866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-15 22:48:41.813897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-15 22:48:41.814073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-15 22:48:41.814098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 
00:24:58.497 [2024-07-15 22:48:41.814311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-15 22:48:41.814340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-15 22:48:41.814548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-15 22:48:41.814573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-15 22:48:41.814835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-15 22:48:41.814863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-15 22:48:41.815065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-15 22:48:41.815093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-15 22:48:41.815252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-15 22:48:41.815278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 
00:24:58.497 [2024-07-15 22:48:41.815455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.497 [2024-07-15 22:48:41.815484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.497 qpair failed and we were unable to recover it.
00:24:58.497 [2024-07-15 22:48:41.815650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.497 [2024-07-15 22:48:41.815678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.497 qpair failed and we were unable to recover it.
00:24:58.497 [2024-07-15 22:48:41.815894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.497 [2024-07-15 22:48:41.815923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.497 qpair failed and we were unable to recover it.
00:24:58.497 [2024-07-15 22:48:41.816118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.497 [2024-07-15 22:48:41.816143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.497 qpair failed and we were unable to recover it.
00:24:58.497 [2024-07-15 22:48:41.816317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.497 [2024-07-15 22:48:41.816344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.497 qpair failed and we were unable to recover it.
00:24:58.497 [2024-07-15 22:48:41.816516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.497 [2024-07-15 22:48:41.816542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.497 qpair failed and we were unable to recover it.
00:24:58.497 [2024-07-15 22:48:41.816710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.497 [2024-07-15 22:48:41.816736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.497 qpair failed and we were unable to recover it.
00:24:58.497 [2024-07-15 22:48:41.816967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.497 [2024-07-15 22:48:41.816993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.497 qpair failed and we were unable to recover it.
00:24:58.497 [2024-07-15 22:48:41.817165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.497 [2024-07-15 22:48:41.817190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.497 qpair failed and we were unable to recover it.
00:24:58.497 [2024-07-15 22:48:41.817385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.497 [2024-07-15 22:48:41.817413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.497 qpair failed and we were unable to recover it.
00:24:58.497 [2024-07-15 22:48:41.817679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.497 [2024-07-15 22:48:41.817707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.497 qpair failed and we were unable to recover it.
00:24:58.497 [2024-07-15 22:48:41.817930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.497 [2024-07-15 22:48:41.817956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.497 qpair failed and we were unable to recover it.
00:24:58.497 [2024-07-15 22:48:41.818159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.497 [2024-07-15 22:48:41.818187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.497 qpair failed and we were unable to recover it.
00:24:58.497 [2024-07-15 22:48:41.818347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.497 [2024-07-15 22:48:41.818379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.497 qpair failed and we were unable to recover it.
00:24:58.497 [2024-07-15 22:48:41.818561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.497 [2024-07-15 22:48:41.818585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.497 qpair failed and we were unable to recover it.
00:24:58.497 [2024-07-15 22:48:41.818779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.497 [2024-07-15 22:48:41.818808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.497 qpair failed and we were unable to recover it.
00:24:58.497 [2024-07-15 22:48:41.819071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.497 [2024-07-15 22:48:41.819100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.497 qpair failed and we were unable to recover it.
00:24:58.497 [2024-07-15 22:48:41.819317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.497 [2024-07-15 22:48:41.819342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.497 qpair failed and we were unable to recover it.
00:24:58.497 [2024-07-15 22:48:41.819491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.497 [2024-07-15 22:48:41.819516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.497 qpair failed and we were unable to recover it.
00:24:58.497 [2024-07-15 22:48:41.819703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.497 [2024-07-15 22:48:41.819731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.497 qpair failed and we were unable to recover it.
00:24:58.497 [2024-07-15 22:48:41.819929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.497 [2024-07-15 22:48:41.819955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.497 qpair failed and we were unable to recover it.
00:24:58.497 [2024-07-15 22:48:41.820111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.497 [2024-07-15 22:48:41.820136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.497 qpair failed and we were unable to recover it.
00:24:58.497 [2024-07-15 22:48:41.820319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.497 [2024-07-15 22:48:41.820346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.497 qpair failed and we were unable to recover it.
00:24:58.497 [2024-07-15 22:48:41.820599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.497 [2024-07-15 22:48:41.820624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.497 qpair failed and we were unable to recover it.
00:24:58.497 [2024-07-15 22:48:41.820828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.497 [2024-07-15 22:48:41.820857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.497 qpair failed and we were unable to recover it.
00:24:58.497 [2024-07-15 22:48:41.821054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.497 [2024-07-15 22:48:41.821082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.497 qpair failed and we were unable to recover it.
00:24:58.497 [2024-07-15 22:48:41.821281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.497 [2024-07-15 22:48:41.821306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.497 qpair failed and we were unable to recover it.
00:24:58.497 [2024-07-15 22:48:41.821495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.497 [2024-07-15 22:48:41.821523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.497 qpair failed and we were unable to recover it.
00:24:58.497 [2024-07-15 22:48:41.821745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.497 [2024-07-15 22:48:41.821773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.497 qpair failed and we were unable to recover it.
00:24:58.497 [2024-07-15 22:48:41.821945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.497 [2024-07-15 22:48:41.821971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.822142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.822170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.822377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.822403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.822599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.822624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.822784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.822812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.823006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.823035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.823242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.823268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.823437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.823462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.823654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.823682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.823848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.823873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.824048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.824076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.824268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.824296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.824496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.824521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.824704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.824732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.824892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.824921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.825117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.825142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.825340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.825368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.825552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.825580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.825740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.825765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.825912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.825943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.826086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.826111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.826288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.826313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.826506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.826534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.826752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.826780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.826984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.827010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.827152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.827181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.827353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.827381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.827591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.827621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.827819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.827847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.828048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.828075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.828227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.828252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.828421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.828449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.828632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.828661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.828848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.828888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.829094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.829122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.829338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.829366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.829588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.829613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.829807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.829835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.830087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.830113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.830305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.830331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.830596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.830624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.830841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.498 [2024-07-15 22:48:41.830869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.498 qpair failed and we were unable to recover it.
00:24:58.498 [2024-07-15 22:48:41.831072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.499 [2024-07-15 22:48:41.831098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.499 qpair failed and we were unable to recover it.
00:24:58.499 [2024-07-15 22:48:41.831278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.499 [2024-07-15 22:48:41.831306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.499 qpair failed and we were unable to recover it.
00:24:58.499 [2024-07-15 22:48:41.831472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.499 [2024-07-15 22:48:41.831501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.499 qpair failed and we were unable to recover it.
00:24:58.499 [2024-07-15 22:48:41.831669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.499 [2024-07-15 22:48:41.831696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.499 qpair failed and we were unable to recover it.
00:24:58.499 [2024-07-15 22:48:41.831893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.499 [2024-07-15 22:48:41.831923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.499 qpair failed and we were unable to recover it.
00:24:58.499 [2024-07-15 22:48:41.832142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.499 [2024-07-15 22:48:41.832170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.499 qpair failed and we were unable to recover it.
00:24:58.499 [2024-07-15 22:48:41.832436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.499 [2024-07-15 22:48:41.832461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.499 qpair failed and we were unable to recover it.
00:24:58.499 [2024-07-15 22:48:41.832663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.499 [2024-07-15 22:48:41.832691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.499 qpair failed and we were unable to recover it.
00:24:58.499 [2024-07-15 22:48:41.832886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.499 [2024-07-15 22:48:41.832915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.499 qpair failed and we were unable to recover it.
00:24:58.499 [2024-07-15 22:48:41.833138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.499 [2024-07-15 22:48:41.833163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.499 qpair failed and we were unable to recover it.
00:24:58.499 [2024-07-15 22:48:41.833364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.499 [2024-07-15 22:48:41.833392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.499 qpair failed and we were unable to recover it.
00:24:58.499 [2024-07-15 22:48:41.833598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.499 [2024-07-15 22:48:41.833624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.499 qpair failed and we were unable to recover it.
00:24:58.499 [2024-07-15 22:48:41.833796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.499 [2024-07-15 22:48:41.833821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.499 qpair failed and we were unable to recover it.
00:24:58.499 [2024-07-15 22:48:41.834032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.499 [2024-07-15 22:48:41.834061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.499 qpair failed and we were unable to recover it.
00:24:58.499 [2024-07-15 22:48:41.834257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.499 [2024-07-15 22:48:41.834286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.499 qpair failed and we were unable to recover it.
00:24:58.499 [2024-07-15 22:48:41.834483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.499 [2024-07-15 22:48:41.834508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.499 qpair failed and we were unable to recover it.
00:24:58.499 [2024-07-15 22:48:41.834698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.499 [2024-07-15 22:48:41.834727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.499 qpair failed and we were unable to recover it.
00:24:58.499 [2024-07-15 22:48:41.834921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.499 [2024-07-15 22:48:41.834950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.499 qpair failed and we were unable to recover it.
00:24:58.499 [2024-07-15 22:48:41.835145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.499 [2024-07-15 22:48:41.835171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.499 qpair failed and we were unable to recover it.
00:24:58.499 [2024-07-15 22:48:41.835392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.499 [2024-07-15 22:48:41.835421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.499 qpair failed and we were unable to recover it.
00:24:58.499 [2024-07-15 22:48:41.835611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.499 [2024-07-15 22:48:41.835640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.499 qpair failed and we were unable to recover it.
00:24:58.499 [2024-07-15 22:48:41.835856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.499 [2024-07-15 22:48:41.835894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.499 qpair failed and we were unable to recover it.
00:24:58.499 [2024-07-15 22:48:41.836122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.499 [2024-07-15 22:48:41.836150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.499 qpair failed and we were unable to recover it.
00:24:58.499 [2024-07-15 22:48:41.836346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.499 [2024-07-15 22:48:41.836374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.499 qpair failed and we were unable to recover it.
00:24:58.499 [2024-07-15 22:48:41.836568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-15 22:48:41.836593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-15 22:48:41.836773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-15 22:48:41.836798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-15 22:48:41.836977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-15 22:48:41.837006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-15 22:48:41.837173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-15 22:48:41.837199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-15 22:48:41.837394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-15 22:48:41.837423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 
00:24:58.499 [2024-07-15 22:48:41.837612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-15 22:48:41.837641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-15 22:48:41.837800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-15 22:48:41.837825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-15 22:48:41.838090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-15 22:48:41.838119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-15 22:48:41.838313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-15 22:48:41.838342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-15 22:48:41.838565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-15 22:48:41.838591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 
00:24:58.499 [2024-07-15 22:48:41.838752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-15 22:48:41.838780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-15 22:48:41.839005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-15 22:48:41.839034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-15 22:48:41.839222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-15 22:48:41.839247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-15 22:48:41.839441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-15 22:48:41.839468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-15 22:48:41.839688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-15 22:48:41.839716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 
00:24:58.499 [2024-07-15 22:48:41.839912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-15 22:48:41.839938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-15 22:48:41.840102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-15 22:48:41.840130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-15 22:48:41.840309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-15 22:48:41.840337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 00:24:58.500 [2024-07-15 22:48:41.840511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-15 22:48:41.840536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 00:24:58.500 [2024-07-15 22:48:41.840728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-15 22:48:41.840756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 
00:24:58.500 [2024-07-15 22:48:41.840979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-15 22:48:41.841008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 00:24:58.500 [2024-07-15 22:48:41.841170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-15 22:48:41.841199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 00:24:58.500 [2024-07-15 22:48:41.841344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-15 22:48:41.841369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 00:24:58.500 [2024-07-15 22:48:41.841556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-15 22:48:41.841584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 00:24:58.500 [2024-07-15 22:48:41.841765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-15 22:48:41.841790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 
00:24:58.500 [2024-07-15 22:48:41.841989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-15 22:48:41.842019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 00:24:58.500 [2024-07-15 22:48:41.842212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-15 22:48:41.842240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 00:24:58.500 [2024-07-15 22:48:41.842440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-15 22:48:41.842470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 00:24:58.500 [2024-07-15 22:48:41.842691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-15 22:48:41.842719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 00:24:58.500 [2024-07-15 22:48:41.842945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-15 22:48:41.842975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 
00:24:58.500 [2024-07-15 22:48:41.843146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-15 22:48:41.843172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 00:24:58.500 [2024-07-15 22:48:41.843339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-15 22:48:41.843365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 00:24:58.500 [2024-07-15 22:48:41.843532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-15 22:48:41.843558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 00:24:58.500 [2024-07-15 22:48:41.843810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-15 22:48:41.843835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 00:24:58.500 [2024-07-15 22:48:41.844076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-15 22:48:41.844102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 
00:24:58.500 [2024-07-15 22:48:41.844274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-15 22:48:41.844302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 00:24:58.500 [2024-07-15 22:48:41.844466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-15 22:48:41.844491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 00:24:58.500 [2024-07-15 22:48:41.844682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-15 22:48:41.844710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 00:24:58.500 [2024-07-15 22:48:41.844908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-15 22:48:41.844938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 00:24:58.500 [2024-07-15 22:48:41.845117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-15 22:48:41.845142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 
00:24:58.500 [2024-07-15 22:48:41.845330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-15 22:48:41.845358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 00:24:58.500 [2024-07-15 22:48:41.845520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-15 22:48:41.845550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 00:24:58.500 [2024-07-15 22:48:41.845742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-15 22:48:41.845767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 00:24:58.500 [2024-07-15 22:48:41.845973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-15 22:48:41.846003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 00:24:58.500 [2024-07-15 22:48:41.846173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-15 22:48:41.846208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 
00:24:58.500 [2024-07-15 22:48:41.846398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-15 22:48:41.846423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 00:24:58.500 [2024-07-15 22:48:41.846611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-15 22:48:41.846639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 00:24:58.500 [2024-07-15 22:48:41.846823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-15 22:48:41.846851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 00:24:58.500 [2024-07-15 22:48:41.847028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-15 22:48:41.847055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 00:24:58.500 [2024-07-15 22:48:41.847246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-15 22:48:41.847275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 
00:24:58.500 [2024-07-15 22:48:41.847436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-15 22:48:41.847464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 00:24:58.500 [2024-07-15 22:48:41.847637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-15 22:48:41.847663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 00:24:58.500 [2024-07-15 22:48:41.847920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.501 [2024-07-15 22:48:41.847949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.501 qpair failed and we were unable to recover it. 00:24:58.501 [2024-07-15 22:48:41.848176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.501 [2024-07-15 22:48:41.848205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.501 qpair failed and we were unable to recover it. 00:24:58.501 [2024-07-15 22:48:41.848442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.501 [2024-07-15 22:48:41.848467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.501 qpair failed and we were unable to recover it. 
00:24:58.501 [2024-07-15 22:48:41.848699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.501 [2024-07-15 22:48:41.848728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.501 qpair failed and we were unable to recover it. 00:24:58.501 [2024-07-15 22:48:41.848984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.501 [2024-07-15 22:48:41.849013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.501 qpair failed and we were unable to recover it. 00:24:58.501 [2024-07-15 22:48:41.849206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.501 [2024-07-15 22:48:41.849239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.501 qpair failed and we were unable to recover it. 00:24:58.501 [2024-07-15 22:48:41.849440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.501 [2024-07-15 22:48:41.849468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.501 qpair failed and we were unable to recover it. 00:24:58.501 [2024-07-15 22:48:41.849654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.501 [2024-07-15 22:48:41.849683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.501 qpair failed and we were unable to recover it. 
00:24:58.501 [2024-07-15 22:48:41.849977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.501 [2024-07-15 22:48:41.850003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.501 qpair failed and we were unable to recover it. 00:24:58.501 [2024-07-15 22:48:41.850172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.501 [2024-07-15 22:48:41.850201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.501 qpair failed and we were unable to recover it. 00:24:58.501 [2024-07-15 22:48:41.850399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.501 [2024-07-15 22:48:41.850428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.501 qpair failed and we were unable to recover it. 00:24:58.501 [2024-07-15 22:48:41.850622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.501 [2024-07-15 22:48:41.850648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.501 qpair failed and we were unable to recover it. 00:24:58.501 [2024-07-15 22:48:41.850899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.501 [2024-07-15 22:48:41.850942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.501 qpair failed and we were unable to recover it. 
00:24:58.501 [2024-07-15 22:48:41.851117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.501 [2024-07-15 22:48:41.851143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.501 qpair failed and we were unable to recover it. 00:24:58.501 [2024-07-15 22:48:41.851330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.501 [2024-07-15 22:48:41.851356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.501 qpair failed and we were unable to recover it. 00:24:58.501 [2024-07-15 22:48:41.851525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.501 [2024-07-15 22:48:41.851553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.501 qpair failed and we were unable to recover it. 00:24:58.501 [2024-07-15 22:48:41.851711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.501 [2024-07-15 22:48:41.851739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.501 qpair failed and we were unable to recover it. 00:24:58.501 [2024-07-15 22:48:41.851939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.501 [2024-07-15 22:48:41.851965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.501 qpair failed and we were unable to recover it. 
00:24:58.501 [2024-07-15 22:48:41.852166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.501 [2024-07-15 22:48:41.852204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.501 qpair failed and we were unable to recover it. 00:24:58.501 [2024-07-15 22:48:41.852472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.501 [2024-07-15 22:48:41.852500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.501 qpair failed and we were unable to recover it. 00:24:58.501 [2024-07-15 22:48:41.852705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.501 [2024-07-15 22:48:41.852730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.501 qpair failed and we were unable to recover it. 00:24:58.501 [2024-07-15 22:48:41.852927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.501 [2024-07-15 22:48:41.852957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.501 qpair failed and we were unable to recover it. 00:24:58.501 [2024-07-15 22:48:41.853117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.501 [2024-07-15 22:48:41.853146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.501 qpair failed and we were unable to recover it. 
00:24:58.501 [2024-07-15 22:48:41.853344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.501 [2024-07-15 22:48:41.853369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.501 qpair failed and we were unable to recover it. 00:24:58.501 [2024-07-15 22:48:41.853590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.501 [2024-07-15 22:48:41.853617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.501 qpair failed and we were unable to recover it. 00:24:58.501 [2024-07-15 22:48:41.853820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.501 [2024-07-15 22:48:41.853848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.501 qpair failed and we were unable to recover it. 00:24:58.501 [2024-07-15 22:48:41.854055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.501 [2024-07-15 22:48:41.854081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.501 qpair failed and we were unable to recover it. 00:24:58.501 [2024-07-15 22:48:41.854275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.501 [2024-07-15 22:48:41.854303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.501 qpair failed and we were unable to recover it. 
00:24:58.501 [2024-07-15 22:48:41.854524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.501 [2024-07-15 22:48:41.854550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.501 qpair failed and we were unable to recover it. 00:24:58.501 [2024-07-15 22:48:41.854750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.501 [2024-07-15 22:48:41.854776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.501 qpair failed and we were unable to recover it. 00:24:58.501 [2024-07-15 22:48:41.854987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.501 [2024-07-15 22:48:41.855015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.501 qpair failed and we were unable to recover it. 00:24:58.501 [2024-07-15 22:48:41.855206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.501 [2024-07-15 22:48:41.855235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.501 qpair failed and we were unable to recover it. 00:24:58.501 [2024-07-15 22:48:41.855425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.501 [2024-07-15 22:48:41.855450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.501 qpair failed and we were unable to recover it. 
00:24:58.501 [2024-07-15 22:48:41.855675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.501 [2024-07-15 22:48:41.855703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.501 qpair failed and we were unable to recover it.
00:24:58.501 [2024-07-15 22:48:41.855930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.501 [2024-07-15 22:48:41.855956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.501 qpair failed and we were unable to recover it.
00:24:58.501 [2024-07-15 22:48:41.856101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.501 [2024-07-15 22:48:41.856126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.501 qpair failed and we were unable to recover it.
00:24:58.501 [2024-07-15 22:48:41.856360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.501 [2024-07-15 22:48:41.856388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.501 qpair failed and we were unable to recover it.
00:24:58.501 [2024-07-15 22:48:41.856616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.501 [2024-07-15 22:48:41.856641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.501 qpair failed and we were unable to recover it.
00:24:58.501 [2024-07-15 22:48:41.856839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.501 [2024-07-15 22:48:41.856864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.501 qpair failed and we were unable to recover it.
00:24:58.501 [2024-07-15 22:48:41.857049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.501 [2024-07-15 22:48:41.857077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.501 qpair failed and we were unable to recover it.
00:24:58.501 [2024-07-15 22:48:41.857268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.501 [2024-07-15 22:48:41.857298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.501 qpair failed and we were unable to recover it.
00:24:58.501 [2024-07-15 22:48:41.857471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.857496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.857673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.857698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.857875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.857919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.858133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.858158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.858389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.858417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.858610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.858639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.858834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.858860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.859070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.859099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.859291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.859319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.859504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.859529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.859749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.859777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.859976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.860005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.860174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.860200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.860391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.860418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.860616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.860641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.860816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.860841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.861050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.861079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.861247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.861276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.861531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.861556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.861784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.861812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.862006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.862035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.862204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.862229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.862428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.862456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.862652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.862680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.862873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.862903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.863100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.863128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.863294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.863323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.863576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.863601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.863823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.863851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.864056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.864082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.864245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.864270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.864462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.864490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.864718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.864744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.864917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.864943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.865203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.865231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.865420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.865448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.865641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.865667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.865858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.865892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.866079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.866107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.866340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.866365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.866530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.866559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.502 [2024-07-15 22:48:41.866755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.502 [2024-07-15 22:48:41.866782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.502 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.867040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.867066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.867264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.867296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.867476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.867504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.867700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.867725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.867935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.867963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.868187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.868212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.868416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.868442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.868635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.868663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.868851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.868896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.869093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.869119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.869296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.869324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.869517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.869545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.869732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.869757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.869964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.869993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.870210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.870238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.870441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.870466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.870628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.870656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.870872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.870906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.871065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.871091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.871287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.871317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.871512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.871540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.871768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.871794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.871980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.872009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.872173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.872201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.872401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.872426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.872602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.872627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.872888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.872925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.873127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.873153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.873345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.873377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.873641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.873669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.873936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.873961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.874140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.874183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.874381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.874409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.874628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.874653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.874855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.874889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.875089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.875114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.875263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.875288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.875506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.875534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.875737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.875766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.875932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.875957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.876144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.503 [2024-07-15 22:48:41.876171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.503 qpair failed and we were unable to recover it.
00:24:58.503 [2024-07-15 22:48:41.876369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.504 [2024-07-15 22:48:41.876397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.504 qpair failed and we were unable to recover it.
00:24:58.504 [2024-07-15 22:48:41.876594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.504 [2024-07-15 22:48:41.876619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.504 qpair failed and we were unable to recover it.
00:24:58.504 [2024-07-15 22:48:41.876816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.504 [2024-07-15 22:48:41.876844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.504 qpair failed and we were unable to recover it.
00:24:58.504 [2024-07-15 22:48:41.877060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.504 [2024-07-15 22:48:41.877086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.504 qpair failed and we were unable to recover it.
00:24:58.504 [2024-07-15 22:48:41.877282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.504 [2024-07-15 22:48:41.877307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.504 qpair failed and we were unable to recover it.
00:24:58.504 [2024-07-15 22:48:41.877532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.504 [2024-07-15 22:48:41.877560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.504 qpair failed and we were unable to recover it.
00:24:58.504 [2024-07-15 22:48:41.877782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.504 [2024-07-15 22:48:41.877809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.504 qpair failed and we were unable to recover it.
00:24:58.504 [2024-07-15 22:48:41.878040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.504 [2024-07-15 22:48:41.878066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.504 qpair failed and we were unable to recover it.
00:24:58.504 [2024-07-15 22:48:41.878251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.504 [2024-07-15 22:48:41.878280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.504 qpair failed and we were unable to recover it.
00:24:58.504 [2024-07-15 22:48:41.878447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.504 [2024-07-15 22:48:41.878476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.504 qpair failed and we were unable to recover it.
00:24:58.504 [2024-07-15 22:48:41.878685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.504 [2024-07-15 22:48:41.878710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.504 qpair failed and we were unable to recover it.
00:24:58.504 [2024-07-15 22:48:41.878914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.504 [2024-07-15 22:48:41.878943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.504 qpair failed and we were unable to recover it.
00:24:58.504 [2024-07-15 22:48:41.879133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.504 [2024-07-15 22:48:41.879161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.504 qpair failed and we were unable to recover it.
00:24:58.504 [2024-07-15 22:48:41.879327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.504 [2024-07-15 22:48:41.879352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.504 qpair failed and we were unable to recover it.
00:24:58.504 [2024-07-15 22:48:41.879543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.504 [2024-07-15 22:48:41.879571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.504 qpair failed and we were unable to recover it.
00:24:58.504 [2024-07-15 22:48:41.879796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.504 [2024-07-15 22:48:41.879824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.504 qpair failed and we were unable to recover it.
00:24:58.504 [2024-07-15 22:48:41.880031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.504 [2024-07-15 22:48:41.880057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.504 qpair failed and we were unable to recover it.
00:24:58.504 [2024-07-15 22:48:41.880255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.504 [2024-07-15 22:48:41.880282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.504 qpair failed and we were unable to recover it.
00:24:58.504 [2024-07-15 22:48:41.880470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.504 [2024-07-15 22:48:41.880498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.504 qpair failed and we were unable to recover it.
00:24:58.504 [2024-07-15 22:48:41.880698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.504 [2024-07-15 22:48:41.880723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.504 qpair failed and we were unable to recover it.
00:24:58.504 [2024-07-15 22:48:41.880920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.504 [2024-07-15 22:48:41.880949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.504 qpair failed and we were unable to recover it.
00:24:58.504 [2024-07-15 22:48:41.881140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.504 [2024-07-15 22:48:41.881168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.504 qpair failed and we were unable to recover it.
00:24:58.504 [2024-07-15 22:48:41.881424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.504 [2024-07-15 22:48:41.881449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.504 qpair failed and we were unable to recover it.
00:24:58.504 [2024-07-15 22:48:41.881659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.504 [2024-07-15 22:48:41.881686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.504 qpair failed and we were unable to recover it.
00:24:58.504 [2024-07-15 22:48:41.881888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.504 [2024-07-15 22:48:41.881917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.504 qpair failed and we were unable to recover it. 00:24:58.504 [2024-07-15 22:48:41.882087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.504 [2024-07-15 22:48:41.882112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.504 qpair failed and we were unable to recover it. 00:24:58.504 [2024-07-15 22:48:41.882299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.504 [2024-07-15 22:48:41.882326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.504 qpair failed and we were unable to recover it. 00:24:58.504 [2024-07-15 22:48:41.882513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.504 [2024-07-15 22:48:41.882541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.504 qpair failed and we were unable to recover it. 00:24:58.504 [2024-07-15 22:48:41.882836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.504 [2024-07-15 22:48:41.882896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.504 qpair failed and we were unable to recover it. 
00:24:58.504 [2024-07-15 22:48:41.883093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.504 [2024-07-15 22:48:41.883118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.504 qpair failed and we were unable to recover it. 00:24:58.504 [2024-07-15 22:48:41.883324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.504 [2024-07-15 22:48:41.883352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.504 qpair failed and we were unable to recover it. 00:24:58.504 [2024-07-15 22:48:41.883523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.504 [2024-07-15 22:48:41.883548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.504 qpair failed and we were unable to recover it. 00:24:58.504 [2024-07-15 22:48:41.883756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.504 [2024-07-15 22:48:41.883797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.504 qpair failed and we were unable to recover it. 00:24:58.504 [2024-07-15 22:48:41.883998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.504 [2024-07-15 22:48:41.884024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.504 qpair failed and we were unable to recover it. 
00:24:58.504 [2024-07-15 22:48:41.884177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.504 [2024-07-15 22:48:41.884201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.504 qpair failed and we were unable to recover it. 00:24:58.504 [2024-07-15 22:48:41.884395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.504 [2024-07-15 22:48:41.884423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.504 qpair failed and we were unable to recover it. 00:24:58.504 [2024-07-15 22:48:41.884639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.504 [2024-07-15 22:48:41.884666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.504 qpair failed and we were unable to recover it. 00:24:58.504 [2024-07-15 22:48:41.884893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.504 [2024-07-15 22:48:41.884920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.504 qpair failed and we were unable to recover it. 00:24:58.504 [2024-07-15 22:48:41.885091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.504 [2024-07-15 22:48:41.885127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.504 qpair failed and we were unable to recover it. 
00:24:58.504 [2024-07-15 22:48:41.885311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.504 [2024-07-15 22:48:41.885339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.504 qpair failed and we were unable to recover it. 00:24:58.504 [2024-07-15 22:48:41.885509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.504 [2024-07-15 22:48:41.885534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.504 qpair failed and we were unable to recover it. 00:24:58.504 [2024-07-15 22:48:41.885793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.885821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 00:24:58.505 [2024-07-15 22:48:41.886020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.886048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 00:24:58.505 [2024-07-15 22:48:41.886217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.886242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 
00:24:58.505 [2024-07-15 22:48:41.886434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.886461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 00:24:58.505 [2024-07-15 22:48:41.886623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.886652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 00:24:58.505 [2024-07-15 22:48:41.886810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.886835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 00:24:58.505 [2024-07-15 22:48:41.887031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.887059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 00:24:58.505 [2024-07-15 22:48:41.887249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.887278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 
00:24:58.505 [2024-07-15 22:48:41.887475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.887500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 00:24:58.505 [2024-07-15 22:48:41.887666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.887694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 00:24:58.505 [2024-07-15 22:48:41.887889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.887918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 00:24:58.505 [2024-07-15 22:48:41.888114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.888139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 00:24:58.505 [2024-07-15 22:48:41.888315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.888343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 
00:24:58.505 [2024-07-15 22:48:41.888525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.888552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 00:24:58.505 [2024-07-15 22:48:41.888714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.888743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 00:24:58.505 [2024-07-15 22:48:41.888974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.889003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 00:24:58.505 [2024-07-15 22:48:41.889203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.889231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 00:24:58.505 [2024-07-15 22:48:41.889425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.889450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 
00:24:58.505 [2024-07-15 22:48:41.889671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.889698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 00:24:58.505 [2024-07-15 22:48:41.889869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.889920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 00:24:58.505 [2024-07-15 22:48:41.890095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.890120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 00:24:58.505 [2024-07-15 22:48:41.890268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.890294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 00:24:58.505 [2024-07-15 22:48:41.890482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.890508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 
00:24:58.505 [2024-07-15 22:48:41.890723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.890748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 00:24:58.505 [2024-07-15 22:48:41.890951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.890979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 00:24:58.505 [2024-07-15 22:48:41.891171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.891199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 00:24:58.505 [2024-07-15 22:48:41.891371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.891395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 00:24:58.505 [2024-07-15 22:48:41.891597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.891624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 
00:24:58.505 [2024-07-15 22:48:41.891796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.891824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 00:24:58.505 [2024-07-15 22:48:41.892031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.892057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 00:24:58.505 [2024-07-15 22:48:41.892213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.892241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 00:24:58.505 [2024-07-15 22:48:41.892445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.892473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 00:24:58.505 [2024-07-15 22:48:41.892665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.892690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 
00:24:58.505 [2024-07-15 22:48:41.892888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.892916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 00:24:58.505 [2024-07-15 22:48:41.893101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.893129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 00:24:58.505 [2024-07-15 22:48:41.893345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.505 [2024-07-15 22:48:41.893371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.505 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-15 22:48:41.893565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-15 22:48:41.893593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-15 22:48:41.893780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-15 22:48:41.893805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 
00:24:58.506 [2024-07-15 22:48:41.893988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-15 22:48:41.894014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-15 22:48:41.894187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-15 22:48:41.894215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-15 22:48:41.894371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-15 22:48:41.894397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-15 22:48:41.894590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-15 22:48:41.894615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-15 22:48:41.894820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-15 22:48:41.894848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 
00:24:58.506 [2024-07-15 22:48:41.895045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-15 22:48:41.895071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-15 22:48:41.895217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-15 22:48:41.895242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-15 22:48:41.895413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-15 22:48:41.895441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-15 22:48:41.895629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-15 22:48:41.895657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-15 22:48:41.895882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-15 22:48:41.895908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 
00:24:58.506 [2024-07-15 22:48:41.896081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-15 22:48:41.896109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-15 22:48:41.896299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-15 22:48:41.896327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-15 22:48:41.896521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-15 22:48:41.896546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-15 22:48:41.896738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-15 22:48:41.896766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-15 22:48:41.896939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-15 22:48:41.896968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 
00:24:58.506 [2024-07-15 22:48:41.897128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-15 22:48:41.897153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-15 22:48:41.897319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-15 22:48:41.897348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-15 22:48:41.897519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-15 22:48:41.897551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-15 22:48:41.897747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-15 22:48:41.897773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-15 22:48:41.897922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-15 22:48:41.897949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 
00:24:58.506 [2024-07-15 22:48:41.898152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-15 22:48:41.898178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-15 22:48:41.898361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-15 22:48:41.898386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-15 22:48:41.898561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-15 22:48:41.898586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-15 22:48:41.898806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-15 22:48:41.898834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-15 22:48:41.899032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-15 22:48:41.899058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 
00:24:58.509 [2024-07-15 22:48:41.923208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-15 22:48:41.923233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-15 22:48:41.923425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-15 22:48:41.923454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-15 22:48:41.923653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-15 22:48:41.923681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-15 22:48:41.923873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-15 22:48:41.923907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-15 22:48:41.924086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-15 22:48:41.924111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 
00:24:58.509 [2024-07-15 22:48:41.924331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-15 22:48:41.924360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-15 22:48:41.924558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-15 22:48:41.924583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-15 22:48:41.924788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-15 22:48:41.924815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-15 22:48:41.925013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-15 22:48:41.925042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-15 22:48:41.925231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-15 22:48:41.925256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 
00:24:58.509 [2024-07-15 22:48:41.925421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-15 22:48:41.925449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-15 22:48:41.925664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-15 22:48:41.925692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-15 22:48:41.925896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-15 22:48:41.925926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-15 22:48:41.926102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-15 22:48:41.926128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-15 22:48:41.926297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-15 22:48:41.926325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 
00:24:58.509 [2024-07-15 22:48:41.926525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-15 22:48:41.926550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-15 22:48:41.926827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-15 22:48:41.926884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-15 22:48:41.927060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-15 22:48:41.927088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-15 22:48:41.927296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-15 22:48:41.927325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-15 22:48:41.927523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-15 22:48:41.927551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 
00:24:58.509 [2024-07-15 22:48:41.927771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-15 22:48:41.927799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-15 22:48:41.928002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-15 22:48:41.928028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-15 22:48:41.928183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.928208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-15 22:48:41.928381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.928406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-15 22:48:41.928600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.928625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 
00:24:58.510 [2024-07-15 22:48:41.928820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.928848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-15 22:48:41.929090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.929116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-15 22:48:41.929268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.929293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-15 22:48:41.929439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.929465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-15 22:48:41.929655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.929680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 
00:24:58.510 [2024-07-15 22:48:41.929852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.929887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-15 22:48:41.930085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.930114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-15 22:48:41.930316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.930344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-15 22:48:41.930545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.930569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-15 22:48:41.930769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.930797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 
00:24:58.510 [2024-07-15 22:48:41.931008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.931034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-15 22:48:41.931202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.931227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-15 22:48:41.931395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.931423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-15 22:48:41.931580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.931608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-15 22:48:41.931825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.931850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 
00:24:58.510 [2024-07-15 22:48:41.932011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.932036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-15 22:48:41.932207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.932234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-15 22:48:41.932455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.932480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-15 22:48:41.932712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.932740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-15 22:48:41.932910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.932938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 
00:24:58.510 [2024-07-15 22:48:41.933107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.933132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-15 22:48:41.933320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.933346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-15 22:48:41.933548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.933576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-15 22:48:41.933762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.933787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-15 22:48:41.933941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.933967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 
00:24:58.510 [2024-07-15 22:48:41.934142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.934168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-15 22:48:41.934322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.934347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-15 22:48:41.934543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.934572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-15 22:48:41.934770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.934798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-15 22:48:41.934967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.934993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 
00:24:58.510 [2024-07-15 22:48:41.935215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.935243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-15 22:48:41.935436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.935464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-15 22:48:41.935639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.935664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-15 22:48:41.935860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.935893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-15 22:48:41.936126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.936155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 
00:24:58.510 [2024-07-15 22:48:41.936358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.936383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-15 22:48:41.936609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.936638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-15 22:48:41.936833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.936860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-15 22:48:41.937070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-15 22:48:41.937096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.511 qpair failed and we were unable to recover it. 00:24:58.511 [2024-07-15 22:48:41.937268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.511 [2024-07-15 22:48:41.937293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.511 qpair failed and we were unable to recover it. 
00:24:58.511 [2024-07-15 22:48:41.937488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.511 [2024-07-15 22:48:41.937518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.511 qpair failed and we were unable to recover it. 00:24:58.511 [2024-07-15 22:48:41.937731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.511 [2024-07-15 22:48:41.937757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.511 qpair failed and we were unable to recover it. 00:24:58.511 [2024-07-15 22:48:41.937957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.511 [2024-07-15 22:48:41.937986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.511 qpair failed and we were unable to recover it. 00:24:58.511 [2024-07-15 22:48:41.938180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.511 [2024-07-15 22:48:41.938209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.511 qpair failed and we were unable to recover it. 00:24:58.511 [2024-07-15 22:48:41.938376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.511 [2024-07-15 22:48:41.938402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.511 qpair failed and we were unable to recover it. 
00:24:58.511 [2024-07-15 22:48:41.938557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.511 [2024-07-15 22:48:41.938583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.511 qpair failed and we were unable to recover it. 00:24:58.511 [2024-07-15 22:48:41.938779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.511 [2024-07-15 22:48:41.938805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.511 qpair failed and we were unable to recover it. 00:24:58.511 [2024-07-15 22:48:41.938977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.511 [2024-07-15 22:48:41.939003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.511 qpair failed and we were unable to recover it. 00:24:58.511 [2024-07-15 22:48:41.939208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.511 [2024-07-15 22:48:41.939237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.511 qpair failed and we were unable to recover it. 00:24:58.511 [2024-07-15 22:48:41.939423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.511 [2024-07-15 22:48:41.939451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.511 qpair failed and we were unable to recover it. 
00:24:58.511 [2024-07-15 22:48:41.939637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.511 [2024-07-15 22:48:41.939662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.511 qpair failed and we were unable to recover it.
[the three log lines above repeat ~114 more times between 22:48:41.939 and 22:48:41.964 (timer ticks 00:24:58.511-00:24:58.793): every connect() attempt to 10.0.0.2, port=4420 fails with errno = 111 (ECONNREFUSED) and qpair 0xd3f200 cannot be recovered]
00:24:58.793 [2024-07-15 22:48:41.964200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.793 [2024-07-15 22:48:41.964229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.793 qpair failed and we were unable to recover it. 00:24:58.793 [2024-07-15 22:48:41.964399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.793 [2024-07-15 22:48:41.964427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.793 qpair failed and we were unable to recover it. 00:24:58.793 [2024-07-15 22:48:41.964606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.964633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.964856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.964893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.965073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.965102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 
00:24:58.794 [2024-07-15 22:48:41.965300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.965330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.965486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.965511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.965651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.965677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.965851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.965883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.966063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.966091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 
00:24:58.794 [2024-07-15 22:48:41.966279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.966307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.966499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.966524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.966678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.966703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.966881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.966907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.967051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.967076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 
00:24:58.794 [2024-07-15 22:48:41.967239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.967266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.967455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.967483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.967701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.967726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.967907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.967936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.968098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.968127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 
00:24:58.794 [2024-07-15 22:48:41.968350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.968376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.968572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.968599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.968793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.968818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.968991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.969017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.969193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.969221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 
00:24:58.794 [2024-07-15 22:48:41.969417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.969444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.969620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.969645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.969846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.969874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.970074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.970102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.970278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.970303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 
00:24:58.794 [2024-07-15 22:48:41.970445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.970470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.970656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.970684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.970847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.970883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.971071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.971099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.971281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.971309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 
00:24:58.794 [2024-07-15 22:48:41.971465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.971490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.971684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.971712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.971887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.971915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.972112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.972137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.972292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.972321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 
00:24:58.794 [2024-07-15 22:48:41.972536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.972564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.972733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.972759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.972901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.972927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.973144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.973172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.973378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.973403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 
00:24:58.794 [2024-07-15 22:48:41.973621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.973649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.973818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.973846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.974039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.974064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.974231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.974259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.974477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.974505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 
00:24:58.794 [2024-07-15 22:48:41.974730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.974755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.794 [2024-07-15 22:48:41.974963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.794 [2024-07-15 22:48:41.974989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.794 qpair failed and we were unable to recover it. 00:24:58.795 [2024-07-15 22:48:41.975211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.795 [2024-07-15 22:48:41.975239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.795 qpair failed and we were unable to recover it. 00:24:58.795 [2024-07-15 22:48:41.975407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.795 [2024-07-15 22:48:41.975432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.795 qpair failed and we were unable to recover it. 00:24:58.795 [2024-07-15 22:48:41.975583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.795 [2024-07-15 22:48:41.975608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.795 qpair failed and we were unable to recover it. 
00:24:58.795 [2024-07-15 22:48:41.975748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.795 [2024-07-15 22:48:41.975789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.795 qpair failed and we were unable to recover it. 00:24:58.795 [2024-07-15 22:48:41.975990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.795 [2024-07-15 22:48:41.976016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.795 qpair failed and we were unable to recover it. 00:24:58.795 [2024-07-15 22:48:41.976210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.795 [2024-07-15 22:48:41.976238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.795 qpair failed and we were unable to recover it. 00:24:58.795 [2024-07-15 22:48:41.976391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.795 [2024-07-15 22:48:41.976419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.795 qpair failed and we were unable to recover it. 00:24:58.795 [2024-07-15 22:48:41.976640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.795 [2024-07-15 22:48:41.976665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.795 qpair failed and we were unable to recover it. 
00:24:58.795 [2024-07-15 22:48:41.976841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.795 [2024-07-15 22:48:41.976869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.795 qpair failed and we were unable to recover it. 00:24:58.795 [2024-07-15 22:48:41.977091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.795 [2024-07-15 22:48:41.977117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.795 qpair failed and we were unable to recover it. 00:24:58.795 [2024-07-15 22:48:41.977280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.795 [2024-07-15 22:48:41.977305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.795 qpair failed and we were unable to recover it. 00:24:58.795 [2024-07-15 22:48:41.977528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.795 [2024-07-15 22:48:41.977556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.795 qpair failed and we were unable to recover it. 00:24:58.795 [2024-07-15 22:48:41.977752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.795 [2024-07-15 22:48:41.977781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.795 qpair failed and we were unable to recover it. 
00:24:58.795 [2024-07-15 22:48:41.978009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.795 [2024-07-15 22:48:41.978036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.795 qpair failed and we were unable to recover it. 00:24:58.795 [2024-07-15 22:48:41.978232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.795 [2024-07-15 22:48:41.978260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.795 qpair failed and we were unable to recover it. 00:24:58.795 [2024-07-15 22:48:41.978452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.795 [2024-07-15 22:48:41.978480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.795 qpair failed and we were unable to recover it. 00:24:58.795 [2024-07-15 22:48:41.978669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.795 [2024-07-15 22:48:41.978694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.795 qpair failed and we were unable to recover it. 00:24:58.795 [2024-07-15 22:48:41.978888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.795 [2024-07-15 22:48:41.978917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.795 qpair failed and we were unable to recover it. 
00:24:58.795 [2024-07-15 22:48:41.979087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.795 [2024-07-15 22:48:41.979114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.795 qpair failed and we were unable to recover it. 00:24:58.795 [2024-07-15 22:48:41.979288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.795 [2024-07-15 22:48:41.979314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.795 qpair failed and we were unable to recover it. 00:24:58.795 [2024-07-15 22:48:41.979516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.795 [2024-07-15 22:48:41.979544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.795 qpair failed and we were unable to recover it. 00:24:58.795 [2024-07-15 22:48:41.979743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.795 [2024-07-15 22:48:41.979776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.795 qpair failed and we were unable to recover it. 00:24:58.795 [2024-07-15 22:48:41.979945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.795 [2024-07-15 22:48:41.979971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.795 qpair failed and we were unable to recover it. 
00:24:58.795 [2024-07-15 22:48:41.980170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.795 [2024-07-15 22:48:41.980198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.795 qpair failed and we were unable to recover it. 00:24:58.795 [2024-07-15 22:48:41.980368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.795 [2024-07-15 22:48:41.980396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.795 qpair failed and we were unable to recover it. 00:24:58.795 [2024-07-15 22:48:41.980592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.795 [2024-07-15 22:48:41.980618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.795 qpair failed and we were unable to recover it. 00:24:58.795 [2024-07-15 22:48:41.980791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.795 [2024-07-15 22:48:41.980818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.795 qpair failed and we were unable to recover it. 00:24:58.795 [2024-07-15 22:48:41.981008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.795 [2024-07-15 22:48:41.981034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.795 qpair failed and we were unable to recover it. 
00:24:58.798 [2024-07-15 22:48:42.004613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.004641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 00:24:58.798 [2024-07-15 22:48:42.004820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.004845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 00:24:58.798 [2024-07-15 22:48:42.005021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.005047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 00:24:58.798 [2024-07-15 22:48:42.005258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.005286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 00:24:58.798 [2024-07-15 22:48:42.005487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.005513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 
00:24:58.798 [2024-07-15 22:48:42.005702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.005731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 00:24:58.798 [2024-07-15 22:48:42.005893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.005922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 00:24:58.798 [2024-07-15 22:48:42.006087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.006113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 00:24:58.798 [2024-07-15 22:48:42.006258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.006283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 00:24:58.798 [2024-07-15 22:48:42.006518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.006546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 
00:24:58.798 [2024-07-15 22:48:42.006745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.006770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 00:24:58.798 [2024-07-15 22:48:42.006990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.007018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 00:24:58.798 [2024-07-15 22:48:42.007220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.007246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 00:24:58.798 [2024-07-15 22:48:42.007397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.007422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 00:24:58.798 [2024-07-15 22:48:42.007646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.007674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 
00:24:58.798 [2024-07-15 22:48:42.007845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.007873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 00:24:58.798 [2024-07-15 22:48:42.008081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.008107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 00:24:58.798 [2024-07-15 22:48:42.008260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.008285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 00:24:58.798 [2024-07-15 22:48:42.008504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.008532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 00:24:58.798 [2024-07-15 22:48:42.008709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.008738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 
00:24:58.798 [2024-07-15 22:48:42.008893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.008919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 00:24:58.798 [2024-07-15 22:48:42.009098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.009125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 00:24:58.798 [2024-07-15 22:48:42.009368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.009394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 00:24:58.798 [2024-07-15 22:48:42.009620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.009645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 00:24:58.798 [2024-07-15 22:48:42.009840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.009868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 
00:24:58.798 [2024-07-15 22:48:42.010075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.010100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 00:24:58.798 [2024-07-15 22:48:42.010270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.010298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 00:24:58.798 [2024-07-15 22:48:42.010497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.010525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 00:24:58.798 [2024-07-15 22:48:42.010715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.010740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 00:24:58.798 [2024-07-15 22:48:42.010893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.010923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 
00:24:58.798 [2024-07-15 22:48:42.011096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.011121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 00:24:58.798 [2024-07-15 22:48:42.011346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.011371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 00:24:58.798 [2024-07-15 22:48:42.011573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.011601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 00:24:58.798 [2024-07-15 22:48:42.011774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.011802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 00:24:58.798 [2024-07-15 22:48:42.011976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.012003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 
00:24:58.798 [2024-07-15 22:48:42.012198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.012226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 00:24:58.798 [2024-07-15 22:48:42.012413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.012441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 00:24:58.798 [2024-07-15 22:48:42.012657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.798 [2024-07-15 22:48:42.012682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.798 qpair failed and we were unable to recover it. 00:24:58.798 [2024-07-15 22:48:42.012896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.012926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 00:24:58.799 [2024-07-15 22:48:42.013088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.013116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 
00:24:58.799 [2024-07-15 22:48:42.013308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.013333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 00:24:58.799 [2024-07-15 22:48:42.013551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.013579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 00:24:58.799 [2024-07-15 22:48:42.013773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.013802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 00:24:58.799 [2024-07-15 22:48:42.013975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.014002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 00:24:58.799 [2024-07-15 22:48:42.014227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.014255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 
00:24:58.799 [2024-07-15 22:48:42.014422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.014451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 00:24:58.799 [2024-07-15 22:48:42.014659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.014685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 00:24:58.799 [2024-07-15 22:48:42.014840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.014866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 00:24:58.799 [2024-07-15 22:48:42.015023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.015066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 00:24:58.799 [2024-07-15 22:48:42.015271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.015296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 
00:24:58.799 [2024-07-15 22:48:42.015493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.015521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 00:24:58.799 [2024-07-15 22:48:42.015717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.015746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 00:24:58.799 [2024-07-15 22:48:42.015920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.015945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 00:24:58.799 [2024-07-15 22:48:42.016126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.016152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 00:24:58.799 [2024-07-15 22:48:42.016389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.016414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 
00:24:58.799 [2024-07-15 22:48:42.016554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.016579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 00:24:58.799 [2024-07-15 22:48:42.016743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.016771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 00:24:58.799 [2024-07-15 22:48:42.016993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.017022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 00:24:58.799 [2024-07-15 22:48:42.017198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.017223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 00:24:58.799 [2024-07-15 22:48:42.017392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.017421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 
00:24:58.799 [2024-07-15 22:48:42.017593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.017625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 00:24:58.799 [2024-07-15 22:48:42.017785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.017810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 00:24:58.799 [2024-07-15 22:48:42.017976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.018005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 00:24:58.799 [2024-07-15 22:48:42.018196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.018224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 00:24:58.799 [2024-07-15 22:48:42.018456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.018481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 
00:24:58.799 [2024-07-15 22:48:42.018682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.018710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 00:24:58.799 [2024-07-15 22:48:42.018886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.018914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 00:24:58.799 [2024-07-15 22:48:42.019109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.019134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 00:24:58.799 [2024-07-15 22:48:42.019292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.019318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 00:24:58.799 [2024-07-15 22:48:42.019485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.019510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 
00:24:58.799 [2024-07-15 22:48:42.019654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.019680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 00:24:58.799 [2024-07-15 22:48:42.019873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.019909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 00:24:58.799 [2024-07-15 22:48:42.020077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.020105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 00:24:58.799 [2024-07-15 22:48:42.020295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.020320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 00:24:58.799 [2024-07-15 22:48:42.020505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.799 [2024-07-15 22:48:42.020533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.799 qpair failed and we were unable to recover it. 
00:24:58.799 [2024-07-15 22:48:42.020726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.799 [2024-07-15 22:48:42.020755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.799 qpair failed and we were unable to recover it.
00:24:58.799 [2024-07-15 22:48:42.020945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.799 [2024-07-15 22:48:42.020971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.799 qpair failed and we were unable to recover it.
00:24:58.799 [2024-07-15 22:48:42.021165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.799 [2024-07-15 22:48:42.021193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.799 qpair failed and we were unable to recover it.
00:24:58.799 [2024-07-15 22:48:42.021386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.799 [2024-07-15 22:48:42.021414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.799 qpair failed and we were unable to recover it.
00:24:58.799 [2024-07-15 22:48:42.021634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.799 [2024-07-15 22:48:42.021660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.799 qpair failed and we were unable to recover it.
00:24:58.799 [2024-07-15 22:48:42.021823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.799 [2024-07-15 22:48:42.021851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.799 qpair failed and we were unable to recover it.
00:24:58.799 [2024-07-15 22:48:42.022031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.022057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.022257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.022282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.022448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.022477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.022649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.022677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.022874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.022907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.023057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.023082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.023222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.023252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.023423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.023448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.023644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.023673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.023860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.023895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.024086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.024111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.024302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.024330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.024516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.024544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.024714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.024741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.024929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.024959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.025123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.025151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.025347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.025374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.025597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.025625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.025779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.025808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.025968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.025994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.026187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.026215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.026377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.026405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.026601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.026626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.026795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.026823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.027041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.027070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.027267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.027292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.027488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.027516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.027703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.027732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.027917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.027943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.028095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.028138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.028331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.028360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.028524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.028549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.028703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.028729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.028869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.028902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.029081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.029107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.029277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.029306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.029493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.800 [2024-07-15 22:48:42.029521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.800 qpair failed and we were unable to recover it.
00:24:58.800 [2024-07-15 22:48:42.029710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.029738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.029919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.029946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.030084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.030110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.030327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.030352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.030558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.030586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.030779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.030807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.031003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.031030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.031197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.031223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.031393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.031421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.031646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.031671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.031837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.031869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.032069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.032097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.032292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.032317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.032491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.032517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.032690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.032716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.032889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.032915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.033132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.033160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.033333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.033361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.033535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.033560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.033764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.033789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.034004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.034034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.034224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.034249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.034419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.034447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.034637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.034663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.034865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.034898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.035073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.035101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.035282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.035310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.035476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.035501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.035689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.035717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.035892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.035922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.036081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.036107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.036256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.036299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.036492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.036520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.036708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.036734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.036929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.036958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.037151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.037179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.037336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.037361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.037555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.037588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.037791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.037816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.037986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.038013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.038209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.038237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.038426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.038454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.801 [2024-07-15 22:48:42.038621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.801 [2024-07-15 22:48:42.038646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.801 qpair failed and we were unable to recover it.
00:24:58.802 [2024-07-15 22:48:42.038839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.802 [2024-07-15 22:48:42.038867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.802 qpair failed and we were unable to recover it.
00:24:58.802 [2024-07-15 22:48:42.039030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.802 [2024-07-15 22:48:42.039058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.802 qpair failed and we were unable to recover it.
00:24:58.802 [2024-07-15 22:48:42.039230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.802 [2024-07-15 22:48:42.039255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.802 qpair failed and we were unable to recover it.
00:24:58.802 [2024-07-15 22:48:42.039422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.802 [2024-07-15 22:48:42.039450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.802 qpair failed and we were unable to recover it.
00:24:58.802 [2024-07-15 22:48:42.039666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.802 [2024-07-15 22:48:42.039694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.802 qpair failed and we were unable to recover it.
00:24:58.802 [2024-07-15 22:48:42.039918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.802 [2024-07-15 22:48:42.039944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.802 qpair failed and we were unable to recover it.
00:24:58.802 [2024-07-15 22:48:42.040115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.802 [2024-07-15 22:48:42.040143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.802 qpair failed and we were unable to recover it.
00:24:58.802 [2024-07-15 22:48:42.040333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.802 [2024-07-15 22:48:42.040361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.802 qpair failed and we were unable to recover it.
00:24:58.802 [2024-07-15 22:48:42.040519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.802 [2024-07-15 22:48:42.040545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.802 qpair failed and we were unable to recover it.
00:24:58.802 [2024-07-15 22:48:42.040737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.802 [2024-07-15 22:48:42.040765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.802 qpair failed and we were unable to recover it.
00:24:58.802 [2024-07-15 22:48:42.040954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.802 [2024-07-15 22:48:42.040984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.802 qpair failed and we were unable to recover it.
00:24:58.802 [2024-07-15 22:48:42.041181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.802 [2024-07-15 22:48:42.041206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.802 qpair failed and we were unable to recover it.
00:24:58.802 [2024-07-15 22:48:42.041406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.802 [2024-07-15 22:48:42.041435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.802 qpair failed and we were unable to recover it.
00:24:58.802 [2024-07-15 22:48:42.041618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.802 [2024-07-15 22:48:42.041647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.802 qpair failed and we were unable to recover it.
00:24:58.802 [2024-07-15 22:48:42.041869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.802 [2024-07-15 22:48:42.041914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.802 qpair failed and we were unable to recover it.
00:24:58.802 [2024-07-15 22:48:42.042116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.802 [2024-07-15 22:48:42.042144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.802 qpair failed and we were unable to recover it.
00:24:58.802 [2024-07-15 22:48:42.042309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.802 [2024-07-15 22:48:42.042338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.802 qpair failed and we were unable to recover it.
00:24:58.802 [2024-07-15 22:48:42.042540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.802 [2024-07-15 22:48:42.042565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.802 qpair failed and we were unable to recover it.
00:24:58.802 [2024-07-15 22:48:42.042800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.802 [2024-07-15 22:48:42.042825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.802 qpair failed and we were unable to recover it.
00:24:58.802 [2024-07-15 22:48:42.043004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.802 [2024-07-15 22:48:42.043030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.802 qpair failed and we were unable to recover it.
00:24:58.802 [2024-07-15 22:48:42.043230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.802 [2024-07-15 22:48:42.043255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.802 qpair failed and we were unable to recover it.
00:24:58.802 [2024-07-15 22:48:42.043436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.802 [2024-07-15 22:48:42.043464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.802 qpair failed and we were unable to recover it.
00:24:58.802 [2024-07-15 22:48:42.043667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.802 [2024-07-15 22:48:42.043692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.802 qpair failed and we were unable to recover it.
00:24:58.802 [2024-07-15 22:48:42.043862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.802 [2024-07-15 22:48:42.043893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.802 qpair failed and we were unable to recover it.
00:24:58.802 [2024-07-15 22:48:42.044067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.802 [2024-07-15 22:48:42.044095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.802 qpair failed and we were unable to recover it.
00:24:58.802 [2024-07-15 22:48:42.044288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.802 [2024-07-15 22:48:42.044316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.802 qpair failed and we were unable to recover it.
00:24:58.802 [2024-07-15 22:48:42.044515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.802 [2024-07-15 22:48:42.044540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.802 qpair failed and we were unable to recover it.
00:24:58.802 [2024-07-15 22:48:42.044720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.802 [2024-07-15 22:48:42.044745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.802 qpair failed and we were unable to recover it.
00:24:58.802 [2024-07-15 22:48:42.044929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.802 [2024-07-15 22:48:42.044955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.802 qpair failed and we were unable to recover it.
00:24:58.802 [2024-07-15 22:48:42.045190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.802 [2024-07-15 22:48:42.045215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.802 qpair failed and we were unable to recover it. 00:24:58.802 [2024-07-15 22:48:42.045370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.802 [2024-07-15 22:48:42.045395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.802 qpair failed and we were unable to recover it. 00:24:58.802 [2024-07-15 22:48:42.045551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.802 [2024-07-15 22:48:42.045576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.802 qpair failed and we were unable to recover it. 00:24:58.802 [2024-07-15 22:48:42.045771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.802 [2024-07-15 22:48:42.045796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.802 qpair failed and we were unable to recover it. 00:24:58.802 [2024-07-15 22:48:42.045992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.802 [2024-07-15 22:48:42.046019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.802 qpair failed and we were unable to recover it. 
00:24:58.802 [2024-07-15 22:48:42.046213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.802 [2024-07-15 22:48:42.046240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.802 qpair failed and we were unable to recover it. 00:24:58.802 [2024-07-15 22:48:42.046437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.802 [2024-07-15 22:48:42.046466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.802 qpair failed and we were unable to recover it. 00:24:58.802 [2024-07-15 22:48:42.046684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.802 [2024-07-15 22:48:42.046712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.802 qpair failed and we were unable to recover it. 00:24:58.802 [2024-07-15 22:48:42.046932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.802 [2024-07-15 22:48:42.046960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.802 qpair failed and we were unable to recover it. 00:24:58.802 [2024-07-15 22:48:42.047149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.802 [2024-07-15 22:48:42.047174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.802 qpair failed and we were unable to recover it. 
00:24:58.802 [2024-07-15 22:48:42.047375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.802 [2024-07-15 22:48:42.047400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.802 qpair failed and we were unable to recover it. 00:24:58.802 [2024-07-15 22:48:42.047626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.802 [2024-07-15 22:48:42.047661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.802 qpair failed and we were unable to recover it. 00:24:58.802 [2024-07-15 22:48:42.047866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.802 [2024-07-15 22:48:42.047900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.802 qpair failed and we were unable to recover it. 00:24:58.802 [2024-07-15 22:48:42.048074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.048103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 00:24:58.803 [2024-07-15 22:48:42.048297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.048325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 
00:24:58.803 [2024-07-15 22:48:42.048517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.048544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 00:24:58.803 [2024-07-15 22:48:42.048745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.048775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 00:24:58.803 [2024-07-15 22:48:42.048991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.049018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 00:24:58.803 [2024-07-15 22:48:42.049186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.049213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 00:24:58.803 [2024-07-15 22:48:42.049440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.049470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 
00:24:58.803 [2024-07-15 22:48:42.049711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.049741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 00:24:58.803 [2024-07-15 22:48:42.049937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.049965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 00:24:58.803 [2024-07-15 22:48:42.050162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.050192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 00:24:58.803 [2024-07-15 22:48:42.050388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.050415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 00:24:58.803 [2024-07-15 22:48:42.050596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.050623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 
00:24:58.803 [2024-07-15 22:48:42.050836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.050866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 00:24:58.803 [2024-07-15 22:48:42.051074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.051101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 00:24:58.803 [2024-07-15 22:48:42.051267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.051295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 00:24:58.803 [2024-07-15 22:48:42.051492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.051523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 00:24:58.803 [2024-07-15 22:48:42.051747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.051773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 
00:24:58.803 [2024-07-15 22:48:42.051957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.051984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 00:24:58.803 [2024-07-15 22:48:42.052202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.052232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 00:24:58.803 [2024-07-15 22:48:42.052427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.052457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 00:24:58.803 [2024-07-15 22:48:42.052665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.052696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 00:24:58.803 [2024-07-15 22:48:42.052897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.052928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 
00:24:58.803 [2024-07-15 22:48:42.053093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.053123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 00:24:58.803 [2024-07-15 22:48:42.053315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.053342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 00:24:58.803 [2024-07-15 22:48:42.053532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.053562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 00:24:58.803 [2024-07-15 22:48:42.053761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.053788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 00:24:58.803 [2024-07-15 22:48:42.053935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.053963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 
00:24:58.803 [2024-07-15 22:48:42.054156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.054185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 00:24:58.803 [2024-07-15 22:48:42.054409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.054436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 00:24:58.803 [2024-07-15 22:48:42.054621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.054647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 00:24:58.803 [2024-07-15 22:48:42.054882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.054923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 00:24:58.803 [2024-07-15 22:48:42.055155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.055185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 
00:24:58.803 [2024-07-15 22:48:42.055382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.055409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 00:24:58.803 [2024-07-15 22:48:42.055608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.055635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 00:24:58.803 [2024-07-15 22:48:42.055847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.055885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 00:24:58.803 [2024-07-15 22:48:42.056076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.056103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 00:24:58.803 [2024-07-15 22:48:42.056291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.056321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 
00:24:58.803 [2024-07-15 22:48:42.056484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.056514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 00:24:58.803 [2024-07-15 22:48:42.056708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.056734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 00:24:58.803 [2024-07-15 22:48:42.056962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.056993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 00:24:58.803 [2024-07-15 22:48:42.057189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.803 [2024-07-15 22:48:42.057219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.803 qpair failed and we were unable to recover it. 00:24:58.803 [2024-07-15 22:48:42.057436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.804 [2024-07-15 22:48:42.057463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.804 qpair failed and we were unable to recover it. 
00:24:58.804 [2024-07-15 22:48:42.057662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.804 [2024-07-15 22:48:42.057691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.804 qpair failed and we were unable to recover it. 00:24:58.804 [2024-07-15 22:48:42.057893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.804 [2024-07-15 22:48:42.057928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.804 qpair failed and we were unable to recover it. 00:24:58.804 [2024-07-15 22:48:42.058123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.804 [2024-07-15 22:48:42.058150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.804 qpair failed and we were unable to recover it. 00:24:58.804 [2024-07-15 22:48:42.058320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.804 [2024-07-15 22:48:42.058349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.804 qpair failed and we were unable to recover it. 00:24:58.804 [2024-07-15 22:48:42.058566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.804 [2024-07-15 22:48:42.058595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.804 qpair failed and we were unable to recover it. 
00:24:58.804 [2024-07-15 22:48:42.058790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.804 [2024-07-15 22:48:42.058816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.804 qpair failed and we were unable to recover it. 00:24:58.804 [2024-07-15 22:48:42.059052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.804 [2024-07-15 22:48:42.059080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.804 qpair failed and we were unable to recover it. 00:24:58.804 [2024-07-15 22:48:42.059274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.804 [2024-07-15 22:48:42.059302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.804 qpair failed and we were unable to recover it. 00:24:58.804 [2024-07-15 22:48:42.059502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.804 [2024-07-15 22:48:42.059529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.804 qpair failed and we were unable to recover it. 00:24:58.804 [2024-07-15 22:48:42.059721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.804 [2024-07-15 22:48:42.059751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.804 qpair failed and we were unable to recover it. 
00:24:58.804 [2024-07-15 22:48:42.059924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.804 [2024-07-15 22:48:42.059957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.804 qpair failed and we were unable to recover it. 00:24:58.804 [2024-07-15 22:48:42.060179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.804 [2024-07-15 22:48:42.060207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.804 qpair failed and we were unable to recover it. 00:24:58.804 [2024-07-15 22:48:42.060435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.804 [2024-07-15 22:48:42.060465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.804 qpair failed and we were unable to recover it. 00:24:58.804 [2024-07-15 22:48:42.060690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.804 [2024-07-15 22:48:42.060719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.804 qpair failed and we were unable to recover it. 00:24:58.804 [2024-07-15 22:48:42.060933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.804 [2024-07-15 22:48:42.060961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.804 qpair failed and we were unable to recover it. 
00:24:58.804 [2024-07-15 22:48:42.061135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.804 [2024-07-15 22:48:42.061179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.804 qpair failed and we were unable to recover it. 00:24:58.804 [2024-07-15 22:48:42.061361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.804 [2024-07-15 22:48:42.061390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.804 qpair failed and we were unable to recover it. 00:24:58.804 [2024-07-15 22:48:42.061581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.804 [2024-07-15 22:48:42.061608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.804 qpair failed and we were unable to recover it. 00:24:58.804 [2024-07-15 22:48:42.061836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.804 [2024-07-15 22:48:42.061865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.804 qpair failed and we were unable to recover it. 00:24:58.804 [2024-07-15 22:48:42.062066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.804 [2024-07-15 22:48:42.062097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.804 qpair failed and we were unable to recover it. 
00:24:58.804 [2024-07-15 22:48:42.062251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.804 [2024-07-15 22:48:42.062278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.804 qpair failed and we were unable to recover it.
00:24:58.807 (the three-line failure above, connect() refused with errno 111 (ECONNREFUSED) on tqpair=0xd3f200 at 10.0.0.2:4420, repeats verbatim with only timestamp changes through 22:48:42.088203)
00:24:58.807 [2024-07-15 22:48:42.088365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.807 [2024-07-15 22:48:42.088396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.807 qpair failed and we were unable to recover it. 00:24:58.807 [2024-07-15 22:48:42.088596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.807 [2024-07-15 22:48:42.088626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.807 qpair failed and we were unable to recover it. 00:24:58.807 [2024-07-15 22:48:42.088825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.807 [2024-07-15 22:48:42.088851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.807 qpair failed and we were unable to recover it. 00:24:58.807 [2024-07-15 22:48:42.089062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.807 [2024-07-15 22:48:42.089092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.807 qpair failed and we were unable to recover it. 00:24:58.807 [2024-07-15 22:48:42.089298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.807 [2024-07-15 22:48:42.089327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.807 qpair failed and we were unable to recover it. 
00:24:58.807 [2024-07-15 22:48:42.089508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.807 [2024-07-15 22:48:42.089535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.807 qpair failed and we were unable to recover it. 00:24:58.807 [2024-07-15 22:48:42.089733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.807 [2024-07-15 22:48:42.089762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.807 qpair failed and we were unable to recover it. 00:24:58.807 [2024-07-15 22:48:42.089961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.807 [2024-07-15 22:48:42.089991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.807 qpair failed and we were unable to recover it. 00:24:58.807 [2024-07-15 22:48:42.090177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.807 [2024-07-15 22:48:42.090204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.807 qpair failed and we were unable to recover it. 00:24:58.807 [2024-07-15 22:48:42.090358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.808 [2024-07-15 22:48:42.090385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.808 qpair failed and we were unable to recover it. 
00:24:58.808 [2024-07-15 22:48:42.090585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.808 [2024-07-15 22:48:42.090611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.808 qpair failed and we were unable to recover it. 00:24:58.808 [2024-07-15 22:48:42.090823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.808 [2024-07-15 22:48:42.090851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.808 qpair failed and we were unable to recover it. 00:24:58.808 [2024-07-15 22:48:42.091063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.808 [2024-07-15 22:48:42.091093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.808 qpair failed and we were unable to recover it. 00:24:58.808 [2024-07-15 22:48:42.091289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.808 [2024-07-15 22:48:42.091318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.808 qpair failed and we were unable to recover it. 00:24:58.808 [2024-07-15 22:48:42.091539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.808 [2024-07-15 22:48:42.091566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.808 qpair failed and we were unable to recover it. 
00:24:58.808 [2024-07-15 22:48:42.091763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.808 [2024-07-15 22:48:42.091792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.808 qpair failed and we were unable to recover it. 00:24:58.808 [2024-07-15 22:48:42.091977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.808 [2024-07-15 22:48:42.092007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.808 qpair failed and we were unable to recover it. 00:24:58.808 [2024-07-15 22:48:42.092200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.808 [2024-07-15 22:48:42.092226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.808 qpair failed and we were unable to recover it. 00:24:58.808 [2024-07-15 22:48:42.092455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.808 [2024-07-15 22:48:42.092484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.808 qpair failed and we were unable to recover it. 00:24:58.808 [2024-07-15 22:48:42.092703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.808 [2024-07-15 22:48:42.092733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.808 qpair failed and we were unable to recover it. 
00:24:58.808 [2024-07-15 22:48:42.092935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.808 [2024-07-15 22:48:42.092967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.808 qpair failed and we were unable to recover it. 00:24:58.808 [2024-07-15 22:48:42.093143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.808 [2024-07-15 22:48:42.093173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.808 qpair failed and we were unable to recover it. 00:24:58.808 [2024-07-15 22:48:42.093366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.808 [2024-07-15 22:48:42.093396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.808 qpair failed and we were unable to recover it. 00:24:58.808 [2024-07-15 22:48:42.093616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.808 [2024-07-15 22:48:42.093642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.808 qpair failed and we were unable to recover it. 00:24:58.808 [2024-07-15 22:48:42.093782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.808 [2024-07-15 22:48:42.093809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.808 qpair failed and we were unable to recover it. 
00:24:58.808 [2024-07-15 22:48:42.094029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.808 [2024-07-15 22:48:42.094059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.808 qpair failed and we were unable to recover it. 00:24:58.808 [2024-07-15 22:48:42.094237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.808 [2024-07-15 22:48:42.094263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.808 qpair failed and we were unable to recover it. 00:24:58.808 [2024-07-15 22:48:42.094456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.808 [2024-07-15 22:48:42.094487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.808 qpair failed and we were unable to recover it. 00:24:58.808 [2024-07-15 22:48:42.094701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.808 [2024-07-15 22:48:42.094731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.808 qpair failed and we were unable to recover it. 00:24:58.808 [2024-07-15 22:48:42.094929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.808 [2024-07-15 22:48:42.094956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.808 qpair failed and we were unable to recover it. 
00:24:58.808 [2024-07-15 22:48:42.095128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.808 [2024-07-15 22:48:42.095155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.808 qpair failed and we were unable to recover it. 00:24:58.808 [2024-07-15 22:48:42.095370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.808 [2024-07-15 22:48:42.095400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.808 qpair failed and we were unable to recover it. 00:24:58.808 [2024-07-15 22:48:42.095602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.808 [2024-07-15 22:48:42.095629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.808 qpair failed and we were unable to recover it. 00:24:58.808 [2024-07-15 22:48:42.095815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.808 [2024-07-15 22:48:42.095845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.808 qpair failed and we were unable to recover it. 00:24:58.808 [2024-07-15 22:48:42.096057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.808 [2024-07-15 22:48:42.096085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.808 qpair failed and we were unable to recover it. 
00:24:58.808 [2024-07-15 22:48:42.096226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.808 [2024-07-15 22:48:42.096253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.808 qpair failed and we were unable to recover it. 00:24:58.808 [2024-07-15 22:48:42.096453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.808 [2024-07-15 22:48:42.096483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.808 qpair failed and we were unable to recover it. 00:24:58.808 [2024-07-15 22:48:42.096684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.808 [2024-07-15 22:48:42.096710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.808 qpair failed and we were unable to recover it. 00:24:58.808 [2024-07-15 22:48:42.096862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.808 [2024-07-15 22:48:42.096900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.808 qpair failed and we were unable to recover it. 00:24:58.808 [2024-07-15 22:48:42.097104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.808 [2024-07-15 22:48:42.097129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.808 qpair failed and we were unable to recover it. 
00:24:58.808 [2024-07-15 22:48:42.097361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.808 [2024-07-15 22:48:42.097390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.808 qpair failed and we were unable to recover it. 00:24:58.808 [2024-07-15 22:48:42.097557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.809 [2024-07-15 22:48:42.097583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.809 qpair failed and we were unable to recover it. 00:24:58.809 [2024-07-15 22:48:42.097818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.809 [2024-07-15 22:48:42.097848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.809 qpair failed and we were unable to recover it. 00:24:58.809 [2024-07-15 22:48:42.098087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.809 [2024-07-15 22:48:42.098114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.809 qpair failed and we were unable to recover it. 00:24:58.809 [2024-07-15 22:48:42.098291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.809 [2024-07-15 22:48:42.098318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.809 qpair failed and we were unable to recover it. 
00:24:58.809 [2024-07-15 22:48:42.098470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.809 [2024-07-15 22:48:42.098496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.809 qpair failed and we were unable to recover it. 00:24:58.809 [2024-07-15 22:48:42.098674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.809 [2024-07-15 22:48:42.098701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.809 qpair failed and we were unable to recover it. 00:24:58.809 [2024-07-15 22:48:42.098891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.809 [2024-07-15 22:48:42.098918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.809 qpair failed and we were unable to recover it. 00:24:58.809 [2024-07-15 22:48:42.099151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.809 [2024-07-15 22:48:42.099181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.809 qpair failed and we were unable to recover it. 00:24:58.809 [2024-07-15 22:48:42.099376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.809 [2024-07-15 22:48:42.099406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.809 qpair failed and we were unable to recover it. 
00:24:58.809 [2024-07-15 22:48:42.099600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.809 [2024-07-15 22:48:42.099627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.809 qpair failed and we were unable to recover it. 00:24:58.809 [2024-07-15 22:48:42.099820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.809 [2024-07-15 22:48:42.099850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.809 qpair failed and we were unable to recover it. 00:24:58.809 [2024-07-15 22:48:42.100062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.809 [2024-07-15 22:48:42.100089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.809 qpair failed and we were unable to recover it. 00:24:58.809 [2024-07-15 22:48:42.100291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.809 [2024-07-15 22:48:42.100317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.809 qpair failed and we were unable to recover it. 00:24:58.809 [2024-07-15 22:48:42.100528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.809 [2024-07-15 22:48:42.100555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.809 qpair failed and we were unable to recover it. 
00:24:58.809 [2024-07-15 22:48:42.100768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.809 [2024-07-15 22:48:42.100798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.809 qpair failed and we were unable to recover it. 00:24:58.809 [2024-07-15 22:48:42.101010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.809 [2024-07-15 22:48:42.101037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.809 qpair failed and we were unable to recover it. 00:24:58.809 [2024-07-15 22:48:42.101188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.809 [2024-07-15 22:48:42.101214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.809 qpair failed and we were unable to recover it. 00:24:58.809 [2024-07-15 22:48:42.101424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.809 [2024-07-15 22:48:42.101453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.809 qpair failed and we were unable to recover it. 00:24:58.809 [2024-07-15 22:48:42.101630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.809 [2024-07-15 22:48:42.101657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.809 qpair failed and we were unable to recover it. 
00:24:58.809 [2024-07-15 22:48:42.101885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.809 [2024-07-15 22:48:42.101914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.809 qpair failed and we were unable to recover it. 00:24:58.809 [2024-07-15 22:48:42.102125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.809 [2024-07-15 22:48:42.102161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.809 qpair failed and we were unable to recover it. 00:24:58.809 [2024-07-15 22:48:42.102349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.809 [2024-07-15 22:48:42.102375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.809 qpair failed and we were unable to recover it. 00:24:58.809 [2024-07-15 22:48:42.102597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.809 [2024-07-15 22:48:42.102626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.809 qpair failed and we were unable to recover it. 00:24:58.809 [2024-07-15 22:48:42.102823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.809 [2024-07-15 22:48:42.102853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.809 qpair failed and we were unable to recover it. 
00:24:58.809 [2024-07-15 22:48:42.103087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.809 [2024-07-15 22:48:42.103114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.809 qpair failed and we were unable to recover it. 00:24:58.809 [2024-07-15 22:48:42.103282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.809 [2024-07-15 22:48:42.103312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.809 qpair failed and we were unable to recover it. 00:24:58.809 [2024-07-15 22:48:42.103500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.809 [2024-07-15 22:48:42.103529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.809 qpair failed and we were unable to recover it. 00:24:58.809 [2024-07-15 22:48:42.103733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.809 [2024-07-15 22:48:42.103759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.809 qpair failed and we were unable to recover it. 00:24:58.810 [2024-07-15 22:48:42.103988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.810 [2024-07-15 22:48:42.104018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.810 qpair failed and we were unable to recover it. 
00:24:58.810 [2024-07-15 22:48:42.104218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.810 [2024-07-15 22:48:42.104248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.810 qpair failed and we were unable to recover it. 00:24:58.810 [2024-07-15 22:48:42.104444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.810 [2024-07-15 22:48:42.104470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.810 qpair failed and we were unable to recover it. 00:24:58.810 [2024-07-15 22:48:42.104626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.810 [2024-07-15 22:48:42.104653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.810 qpair failed and we were unable to recover it. 00:24:58.810 [2024-07-15 22:48:42.104842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.810 [2024-07-15 22:48:42.104871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.810 qpair failed and we were unable to recover it. 00:24:58.810 [2024-07-15 22:48:42.105069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.810 [2024-07-15 22:48:42.105096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.810 qpair failed and we were unable to recover it. 
00:24:58.810 [2024-07-15 22:48:42.105270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.810 [2024-07-15 22:48:42.105300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.810 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" triplet repeats ~114 more times for tqpair=0xd3f200 (addr=10.0.0.2, port=4420), with timestamps 22:48:42.105 through 22:48:42.130 ...]
00:24:58.813 [2024-07-15 22:48:42.131000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.813 [2024-07-15 22:48:42.131028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.813 qpair failed and we were unable to recover it. 00:24:58.813 [2024-07-15 22:48:42.131201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.813 [2024-07-15 22:48:42.131228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.813 qpair failed and we were unable to recover it. 00:24:58.813 [2024-07-15 22:48:42.131423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.813 [2024-07-15 22:48:42.131453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.813 qpair failed and we were unable to recover it. 00:24:58.813 [2024-07-15 22:48:42.131657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.813 [2024-07-15 22:48:42.131684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.813 qpair failed and we were unable to recover it. 00:24:58.813 [2024-07-15 22:48:42.131823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.813 [2024-07-15 22:48:42.131850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.813 qpair failed and we were unable to recover it. 
00:24:58.813 [2024-07-15 22:48:42.132057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.813 [2024-07-15 22:48:42.132087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.813 qpair failed and we were unable to recover it. 00:24:58.813 [2024-07-15 22:48:42.132242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.813 [2024-07-15 22:48:42.132272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.813 qpair failed and we were unable to recover it. 00:24:58.813 [2024-07-15 22:48:42.132465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.813 [2024-07-15 22:48:42.132496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.813 qpair failed and we were unable to recover it. 00:24:58.813 [2024-07-15 22:48:42.132662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.813 [2024-07-15 22:48:42.132691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.813 qpair failed and we were unable to recover it. 00:24:58.813 [2024-07-15 22:48:42.132883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.813 [2024-07-15 22:48:42.132913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.813 qpair failed and we were unable to recover it. 
00:24:58.813 [2024-07-15 22:48:42.133109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.813 [2024-07-15 22:48:42.133136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.813 qpair failed and we were unable to recover it. 00:24:58.813 [2024-07-15 22:48:42.133289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.813 [2024-07-15 22:48:42.133316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.813 qpair failed and we were unable to recover it. 00:24:58.813 [2024-07-15 22:48:42.133532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.813 [2024-07-15 22:48:42.133561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 00:24:58.814 [2024-07-15 22:48:42.133758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.814 [2024-07-15 22:48:42.133784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 00:24:58.814 [2024-07-15 22:48:42.133977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.814 [2024-07-15 22:48:42.134008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 
00:24:58.814 [2024-07-15 22:48:42.134195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.814 [2024-07-15 22:48:42.134225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 00:24:58.814 [2024-07-15 22:48:42.134436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.814 [2024-07-15 22:48:42.134463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 00:24:58.814 [2024-07-15 22:48:42.134657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.814 [2024-07-15 22:48:42.134687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 00:24:58.814 [2024-07-15 22:48:42.134889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.814 [2024-07-15 22:48:42.134920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 00:24:58.814 [2024-07-15 22:48:42.135117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.814 [2024-07-15 22:48:42.135143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 
00:24:58.814 [2024-07-15 22:48:42.135321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.814 [2024-07-15 22:48:42.135347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 00:24:58.814 [2024-07-15 22:48:42.135548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.814 [2024-07-15 22:48:42.135578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 00:24:58.814 [2024-07-15 22:48:42.135798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.814 [2024-07-15 22:48:42.135824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 00:24:58.814 [2024-07-15 22:48:42.136002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.814 [2024-07-15 22:48:42.136032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 00:24:58.814 [2024-07-15 22:48:42.136190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.814 [2024-07-15 22:48:42.136219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 
00:24:58.814 [2024-07-15 22:48:42.136409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.814 [2024-07-15 22:48:42.136436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 00:24:58.814 [2024-07-15 22:48:42.136640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.814 [2024-07-15 22:48:42.136670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 00:24:58.814 [2024-07-15 22:48:42.136866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.814 [2024-07-15 22:48:42.136903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 00:24:58.814 [2024-07-15 22:48:42.137103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.814 [2024-07-15 22:48:42.137130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 00:24:58.814 [2024-07-15 22:48:42.137356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.814 [2024-07-15 22:48:42.137386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 
00:24:58.814 [2024-07-15 22:48:42.137593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.814 [2024-07-15 22:48:42.137622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 00:24:58.814 [2024-07-15 22:48:42.137819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.814 [2024-07-15 22:48:42.137846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 00:24:58.814 [2024-07-15 22:48:42.138077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.814 [2024-07-15 22:48:42.138107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 00:24:58.814 [2024-07-15 22:48:42.138302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.814 [2024-07-15 22:48:42.138329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 00:24:58.814 [2024-07-15 22:48:42.138513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.814 [2024-07-15 22:48:42.138540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 
00:24:58.814 [2024-07-15 22:48:42.138752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.814 [2024-07-15 22:48:42.138778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 00:24:58.814 [2024-07-15 22:48:42.139004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.814 [2024-07-15 22:48:42.139034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 00:24:58.814 [2024-07-15 22:48:42.139255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.814 [2024-07-15 22:48:42.139282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 00:24:58.814 [2024-07-15 22:48:42.139482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.814 [2024-07-15 22:48:42.139511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 00:24:58.814 [2024-07-15 22:48:42.139735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.814 [2024-07-15 22:48:42.139765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 
00:24:58.814 [2024-07-15 22:48:42.139970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.814 [2024-07-15 22:48:42.139998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 00:24:58.814 [2024-07-15 22:48:42.140198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.814 [2024-07-15 22:48:42.140229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 00:24:58.814 [2024-07-15 22:48:42.140420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.814 [2024-07-15 22:48:42.140450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 00:24:58.814 [2024-07-15 22:48:42.140645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.814 [2024-07-15 22:48:42.140671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 00:24:58.814 [2024-07-15 22:48:42.140906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.814 [2024-07-15 22:48:42.140936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 
00:24:58.814 [2024-07-15 22:48:42.141121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.814 [2024-07-15 22:48:42.141151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 00:24:58.814 [2024-07-15 22:48:42.141369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.814 [2024-07-15 22:48:42.141395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.814 qpair failed and we were unable to recover it. 00:24:58.814 [2024-07-15 22:48:42.141599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.815 [2024-07-15 22:48:42.141628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.815 qpair failed and we were unable to recover it. 00:24:58.815 [2024-07-15 22:48:42.141832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.815 [2024-07-15 22:48:42.141862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.815 qpair failed and we were unable to recover it. 00:24:58.815 [2024-07-15 22:48:42.142052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.815 [2024-07-15 22:48:42.142079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.815 qpair failed and we were unable to recover it. 
00:24:58.815 [2024-07-15 22:48:42.142272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.815 [2024-07-15 22:48:42.142302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.815 qpair failed and we were unable to recover it. 00:24:58.815 [2024-07-15 22:48:42.142528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.815 [2024-07-15 22:48:42.142554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.815 qpair failed and we were unable to recover it. 00:24:58.815 [2024-07-15 22:48:42.142757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.815 [2024-07-15 22:48:42.142783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.815 qpair failed and we were unable to recover it. 00:24:58.815 [2024-07-15 22:48:42.142988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.815 [2024-07-15 22:48:42.143019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.815 qpair failed and we were unable to recover it. 00:24:58.815 [2024-07-15 22:48:42.143237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.815 [2024-07-15 22:48:42.143267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.815 qpair failed and we were unable to recover it. 
00:24:58.815 [2024-07-15 22:48:42.143468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.815 [2024-07-15 22:48:42.143495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.815 qpair failed and we were unable to recover it. 00:24:58.815 [2024-07-15 22:48:42.143690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.815 [2024-07-15 22:48:42.143720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.815 qpair failed and we were unable to recover it. 00:24:58.815 [2024-07-15 22:48:42.143910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.815 [2024-07-15 22:48:42.143946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.815 qpair failed and we were unable to recover it. 00:24:58.815 [2024-07-15 22:48:42.144142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.815 [2024-07-15 22:48:42.144173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.815 qpair failed and we were unable to recover it. 00:24:58.815 [2024-07-15 22:48:42.144320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.815 [2024-07-15 22:48:42.144346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.815 qpair failed and we were unable to recover it. 
00:24:58.815 [2024-07-15 22:48:42.144521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.815 [2024-07-15 22:48:42.144547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.815 qpair failed and we were unable to recover it. 00:24:58.815 [2024-07-15 22:48:42.144777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.815 [2024-07-15 22:48:42.144805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.815 qpair failed and we were unable to recover it. 00:24:58.815 [2024-07-15 22:48:42.145019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.815 [2024-07-15 22:48:42.145050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.815 qpair failed and we were unable to recover it. 00:24:58.815 [2024-07-15 22:48:42.145249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.815 [2024-07-15 22:48:42.145279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.815 qpair failed and we were unable to recover it. 00:24:58.815 [2024-07-15 22:48:42.145450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.815 [2024-07-15 22:48:42.145477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.815 qpair failed and we were unable to recover it. 
00:24:58.815 [2024-07-15 22:48:42.145702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.815 [2024-07-15 22:48:42.145732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.815 qpair failed and we were unable to recover it. 00:24:58.815 [2024-07-15 22:48:42.145924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.815 [2024-07-15 22:48:42.145954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.815 qpair failed and we were unable to recover it. 00:24:58.815 [2024-07-15 22:48:42.146159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.815 [2024-07-15 22:48:42.146186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.815 qpair failed and we were unable to recover it. 00:24:58.815 [2024-07-15 22:48:42.146361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.815 [2024-07-15 22:48:42.146387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.815 qpair failed and we were unable to recover it. 00:24:58.815 [2024-07-15 22:48:42.146609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.815 [2024-07-15 22:48:42.146639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.815 qpair failed and we were unable to recover it. 
00:24:58.815 [2024-07-15 22:48:42.146830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.815 [2024-07-15 22:48:42.146857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.815 qpair failed and we were unable to recover it. 00:24:58.815 [2024-07-15 22:48:42.147064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.815 [2024-07-15 22:48:42.147101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.815 qpair failed and we were unable to recover it. 00:24:58.815 [2024-07-15 22:48:42.147300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.815 [2024-07-15 22:48:42.147331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.815 qpair failed and we were unable to recover it. 00:24:58.815 [2024-07-15 22:48:42.147531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.815 [2024-07-15 22:48:42.147558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.816 qpair failed and we were unable to recover it. 00:24:58.816 [2024-07-15 22:48:42.147760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.816 [2024-07-15 22:48:42.147790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.816 qpair failed and we were unable to recover it. 
00:24:58.816 [2024-07-15 22:48:42.147989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.816 [2024-07-15 22:48:42.148023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.816 qpair failed and we were unable to recover it. 00:24:58.816 [2024-07-15 22:48:42.148237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.816 [2024-07-15 22:48:42.148264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.816 qpair failed and we were unable to recover it. 00:24:58.816 [2024-07-15 22:48:42.148468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.816 [2024-07-15 22:48:42.148498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.816 qpair failed and we were unable to recover it. 00:24:58.816 [2024-07-15 22:48:42.148660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.816 [2024-07-15 22:48:42.148689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.816 qpair failed and we were unable to recover it. 00:24:58.816 [2024-07-15 22:48:42.148867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.816 [2024-07-15 22:48:42.148901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.816 qpair failed and we were unable to recover it. 
00:24:58.819 [2024-07-15 22:48:42.174291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.819 [2024-07-15 22:48:42.174318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.819 qpair failed and we were unable to recover it. 00:24:58.819 [2024-07-15 22:48:42.174528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.819 [2024-07-15 22:48:42.174554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.819 qpair failed and we were unable to recover it. 00:24:58.819 [2024-07-15 22:48:42.174748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.819 [2024-07-15 22:48:42.174777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.819 qpair failed and we were unable to recover it. 00:24:58.819 [2024-07-15 22:48:42.175015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.819 [2024-07-15 22:48:42.175043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.819 qpair failed and we were unable to recover it. 00:24:58.819 [2024-07-15 22:48:42.175255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.819 [2024-07-15 22:48:42.175285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.819 qpair failed and we were unable to recover it. 
00:24:58.819 [2024-07-15 22:48:42.175478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.819 [2024-07-15 22:48:42.175508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.819 qpair failed and we were unable to recover it. 00:24:58.819 [2024-07-15 22:48:42.175694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.819 [2024-07-15 22:48:42.175721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.819 qpair failed and we were unable to recover it. 00:24:58.819 [2024-07-15 22:48:42.175921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.819 [2024-07-15 22:48:42.175952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.819 qpair failed and we were unable to recover it. 00:24:58.819 [2024-07-15 22:48:42.176176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.819 [2024-07-15 22:48:42.176203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.819 qpair failed and we were unable to recover it. 00:24:58.819 [2024-07-15 22:48:42.176378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.819 [2024-07-15 22:48:42.176405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.819 qpair failed and we were unable to recover it. 
00:24:58.819 [2024-07-15 22:48:42.176603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.819 [2024-07-15 22:48:42.176632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.819 qpair failed and we were unable to recover it. 00:24:58.819 [2024-07-15 22:48:42.176801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.819 [2024-07-15 22:48:42.176831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.819 qpair failed and we were unable to recover it. 00:24:58.819 [2024-07-15 22:48:42.177029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.819 [2024-07-15 22:48:42.177058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.819 qpair failed and we were unable to recover it. 00:24:58.819 [2024-07-15 22:48:42.177216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.820 [2024-07-15 22:48:42.177243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.820 qpair failed and we were unable to recover it. 00:24:58.820 [2024-07-15 22:48:42.177448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.820 [2024-07-15 22:48:42.177475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.820 qpair failed and we were unable to recover it. 
00:24:58.820 [2024-07-15 22:48:42.177708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.820 [2024-07-15 22:48:42.177735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.820 qpair failed and we were unable to recover it. 00:24:58.820 [2024-07-15 22:48:42.177956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.820 [2024-07-15 22:48:42.177986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.820 qpair failed and we were unable to recover it. 00:24:58.820 [2024-07-15 22:48:42.178171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.820 [2024-07-15 22:48:42.178201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.820 qpair failed and we were unable to recover it. 00:24:58.820 [2024-07-15 22:48:42.178430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.820 [2024-07-15 22:48:42.178456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.820 qpair failed and we were unable to recover it. 00:24:58.820 [2024-07-15 22:48:42.178690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.820 [2024-07-15 22:48:42.178717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.820 qpair failed and we were unable to recover it. 
00:24:58.820 [2024-07-15 22:48:42.178923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.820 [2024-07-15 22:48:42.178951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.820 qpair failed and we were unable to recover it. 00:24:58.820 [2024-07-15 22:48:42.179167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.820 [2024-07-15 22:48:42.179198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.820 qpair failed and we were unable to recover it. 00:24:58.820 [2024-07-15 22:48:42.179363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.820 [2024-07-15 22:48:42.179393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.820 qpair failed and we were unable to recover it. 00:24:58.820 [2024-07-15 22:48:42.179585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.820 [2024-07-15 22:48:42.179616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.820 qpair failed and we were unable to recover it. 00:24:58.820 [2024-07-15 22:48:42.179781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.820 [2024-07-15 22:48:42.179807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.820 qpair failed and we were unable to recover it. 
00:24:58.820 [2024-07-15 22:48:42.179961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.820 [2024-07-15 22:48:42.179989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.820 qpair failed and we were unable to recover it. 00:24:58.820 [2024-07-15 22:48:42.180138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.820 [2024-07-15 22:48:42.180165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.820 qpair failed and we were unable to recover it. 00:24:58.820 [2024-07-15 22:48:42.180339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.820 [2024-07-15 22:48:42.180366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.820 qpair failed and we were unable to recover it. 00:24:58.820 [2024-07-15 22:48:42.180586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.820 [2024-07-15 22:48:42.180616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.820 qpair failed and we were unable to recover it. 00:24:58.820 [2024-07-15 22:48:42.180774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.820 [2024-07-15 22:48:42.180804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.820 qpair failed and we were unable to recover it. 
00:24:58.820 [2024-07-15 22:48:42.180978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.820 [2024-07-15 22:48:42.181006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.820 qpair failed and we were unable to recover it. 00:24:58.820 [2024-07-15 22:48:42.181228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.820 [2024-07-15 22:48:42.181258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.820 qpair failed and we were unable to recover it. 00:24:58.820 [2024-07-15 22:48:42.181419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.820 [2024-07-15 22:48:42.181449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.820 qpair failed and we were unable to recover it. 00:24:58.820 [2024-07-15 22:48:42.181642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.820 [2024-07-15 22:48:42.181669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.820 qpair failed and we were unable to recover it. 00:24:58.820 [2024-07-15 22:48:42.181867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.820 [2024-07-15 22:48:42.181906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.820 qpair failed and we were unable to recover it. 
00:24:58.820 [2024-07-15 22:48:42.182110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.820 [2024-07-15 22:48:42.182140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.820 qpair failed and we were unable to recover it. 00:24:58.820 [2024-07-15 22:48:42.182365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.820 [2024-07-15 22:48:42.182392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.820 qpair failed and we were unable to recover it. 00:24:58.820 [2024-07-15 22:48:42.182555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.820 [2024-07-15 22:48:42.182586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.820 qpair failed and we were unable to recover it. 00:24:58.820 [2024-07-15 22:48:42.182771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.820 [2024-07-15 22:48:42.182801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.820 qpair failed and we were unable to recover it. 00:24:58.820 [2024-07-15 22:48:42.183007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.821 [2024-07-15 22:48:42.183035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.821 qpair failed and we were unable to recover it. 
00:24:58.821 [2024-07-15 22:48:42.183232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.821 [2024-07-15 22:48:42.183262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.821 qpair failed and we were unable to recover it. 00:24:58.821 [2024-07-15 22:48:42.183425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.821 [2024-07-15 22:48:42.183455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.821 qpair failed and we were unable to recover it. 00:24:58.821 [2024-07-15 22:48:42.183654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.821 [2024-07-15 22:48:42.183681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.821 qpair failed and we were unable to recover it. 00:24:58.821 [2024-07-15 22:48:42.183906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.821 [2024-07-15 22:48:42.183936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.821 qpair failed and we were unable to recover it. 00:24:58.821 [2024-07-15 22:48:42.184107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.821 [2024-07-15 22:48:42.184137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.821 qpair failed and we were unable to recover it. 
00:24:58.821 [2024-07-15 22:48:42.184336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.821 [2024-07-15 22:48:42.184364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.821 qpair failed and we were unable to recover it. 00:24:58.821 [2024-07-15 22:48:42.184568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.821 [2024-07-15 22:48:42.184598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.821 qpair failed and we were unable to recover it. 00:24:58.821 [2024-07-15 22:48:42.184797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.821 [2024-07-15 22:48:42.184824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.821 qpair failed and we were unable to recover it. 00:24:58.821 [2024-07-15 22:48:42.184987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.821 [2024-07-15 22:48:42.185015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.821 qpair failed and we were unable to recover it. 00:24:58.821 [2024-07-15 22:48:42.185184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.821 [2024-07-15 22:48:42.185214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.821 qpair failed and we were unable to recover it. 
00:24:58.821 [2024-07-15 22:48:42.185441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.821 [2024-07-15 22:48:42.185471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.821 qpair failed and we were unable to recover it. 00:24:58.821 [2024-07-15 22:48:42.185677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.821 [2024-07-15 22:48:42.185703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.821 qpair failed and we were unable to recover it. 00:24:58.821 [2024-07-15 22:48:42.185859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.821 [2024-07-15 22:48:42.185899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.821 qpair failed and we were unable to recover it. 00:24:58.821 [2024-07-15 22:48:42.186103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.821 [2024-07-15 22:48:42.186147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.821 qpair failed and we were unable to recover it. 00:24:58.821 [2024-07-15 22:48:42.186342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.821 [2024-07-15 22:48:42.186369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.821 qpair failed and we were unable to recover it. 
00:24:58.821 [2024-07-15 22:48:42.186562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.821 [2024-07-15 22:48:42.186592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.821 qpair failed and we were unable to recover it. 00:24:58.821 [2024-07-15 22:48:42.186806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.821 [2024-07-15 22:48:42.186835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.821 qpair failed and we were unable to recover it. 00:24:58.821 [2024-07-15 22:48:42.187018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.821 [2024-07-15 22:48:42.187046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.821 qpair failed and we were unable to recover it. 00:24:58.821 [2024-07-15 22:48:42.187245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.821 [2024-07-15 22:48:42.187274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.821 qpair failed and we were unable to recover it. 00:24:58.821 [2024-07-15 22:48:42.187467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.821 [2024-07-15 22:48:42.187497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.821 qpair failed and we were unable to recover it. 
00:24:58.821 [2024-07-15 22:48:42.187691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.821 [2024-07-15 22:48:42.187718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.821 qpair failed and we were unable to recover it. 00:24:58.821 [2024-07-15 22:48:42.187889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.821 [2024-07-15 22:48:42.187919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.821 qpair failed and we were unable to recover it. 00:24:58.821 [2024-07-15 22:48:42.188114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.821 [2024-07-15 22:48:42.188148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.821 qpair failed and we were unable to recover it. 00:24:58.821 [2024-07-15 22:48:42.188366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.821 [2024-07-15 22:48:42.188393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.821 qpair failed and we were unable to recover it. 00:24:58.821 [2024-07-15 22:48:42.188621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.821 [2024-07-15 22:48:42.188650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.821 qpair failed and we were unable to recover it. 
00:24:58.821 [2024-07-15 22:48:42.188839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.821 [2024-07-15 22:48:42.188868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.821 qpair failed and we were unable to recover it. 00:24:58.821 [2024-07-15 22:48:42.189053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.821 [2024-07-15 22:48:42.189080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.821 qpair failed and we were unable to recover it. 00:24:58.821 [2024-07-15 22:48:42.189304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.821 [2024-07-15 22:48:42.189333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.821 qpair failed and we were unable to recover it. 00:24:58.821 [2024-07-15 22:48:42.189557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.821 [2024-07-15 22:48:42.189587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.821 qpair failed and we were unable to recover it. 00:24:58.821 [2024-07-15 22:48:42.189783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.821 [2024-07-15 22:48:42.189809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.821 qpair failed and we were unable to recover it. 
00:24:58.821 [2024-07-15 22:48:42.190011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.821 [2024-07-15 22:48:42.190039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.821 qpair failed and we were unable to recover it. 00:24:58.822 [2024-07-15 22:48:42.190220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.822 [2024-07-15 22:48:42.190250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.822 qpair failed and we were unable to recover it. 00:24:58.822 [2024-07-15 22:48:42.190417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.822 [2024-07-15 22:48:42.190445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.822 qpair failed and we were unable to recover it. 00:24:58.822 [2024-07-15 22:48:42.190646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.822 [2024-07-15 22:48:42.190676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.822 qpair failed and we were unable to recover it. 00:24:58.822 [2024-07-15 22:48:42.190840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.822 [2024-07-15 22:48:42.190871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.822 qpair failed and we were unable to recover it. 
00:24:58.822 [2024-07-15 22:48:42.191080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.822 [2024-07-15 22:48:42.191107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.822 qpair failed and we were unable to recover it. 
00:24:58.825 (last message group repeated from 22:48:42.191268 through 22:48:42.216560: connect() failed, errno = 111; sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.)
00:24:58.825 [2024-07-15 22:48:42.216755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.825 [2024-07-15 22:48:42.216790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.825 qpair failed and we were unable to recover it. 00:24:58.825 [2024-07-15 22:48:42.217023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.825 [2024-07-15 22:48:42.217053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.825 qpair failed and we were unable to recover it. 00:24:58.825 [2024-07-15 22:48:42.217249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.825 [2024-07-15 22:48:42.217276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.825 qpair failed and we were unable to recover it. 00:24:58.825 [2024-07-15 22:48:42.217479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.825 [2024-07-15 22:48:42.217508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.825 qpair failed and we were unable to recover it. 00:24:58.825 [2024-07-15 22:48:42.217677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.825 [2024-07-15 22:48:42.217706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.825 qpair failed and we were unable to recover it. 
00:24:58.825 [2024-07-15 22:48:42.217914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.825 [2024-07-15 22:48:42.217942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.825 qpair failed and we were unable to recover it. 00:24:58.825 [2024-07-15 22:48:42.218138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.825 [2024-07-15 22:48:42.218168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.825 qpair failed and we were unable to recover it. 00:24:58.825 [2024-07-15 22:48:42.218361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.825 [2024-07-15 22:48:42.218391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.825 qpair failed and we were unable to recover it. 00:24:58.825 [2024-07-15 22:48:42.218561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.826 [2024-07-15 22:48:42.218593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.826 qpair failed and we were unable to recover it. 00:24:58.826 [2024-07-15 22:48:42.218743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.826 [2024-07-15 22:48:42.218788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.826 qpair failed and we were unable to recover it. 
00:24:58.826 [2024-07-15 22:48:42.218984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.826 [2024-07-15 22:48:42.219014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.826 qpair failed and we were unable to recover it. 00:24:58.826 [2024-07-15 22:48:42.219205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.826 [2024-07-15 22:48:42.219232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.826 qpair failed and we were unable to recover it. 00:24:58.826 [2024-07-15 22:48:42.219431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.826 [2024-07-15 22:48:42.219461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.826 qpair failed and we were unable to recover it. 00:24:58.826 [2024-07-15 22:48:42.219679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.826 [2024-07-15 22:48:42.219708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.826 qpair failed and we were unable to recover it. 00:24:58.826 [2024-07-15 22:48:42.219931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.826 [2024-07-15 22:48:42.219959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.826 qpair failed and we were unable to recover it. 
00:24:58.826 [2024-07-15 22:48:42.220158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.826 [2024-07-15 22:48:42.220188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.826 qpair failed and we were unable to recover it. 00:24:58.826 [2024-07-15 22:48:42.220378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.826 [2024-07-15 22:48:42.220407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.826 qpair failed and we were unable to recover it. 00:24:58.826 [2024-07-15 22:48:42.220599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.826 [2024-07-15 22:48:42.220625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.826 qpair failed and we were unable to recover it. 00:24:58.826 [2024-07-15 22:48:42.220851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.826 [2024-07-15 22:48:42.220888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.826 qpair failed and we were unable to recover it. 00:24:58.826 [2024-07-15 22:48:42.221131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.826 [2024-07-15 22:48:42.221159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.826 qpair failed and we were unable to recover it. 
00:24:58.826 [2024-07-15 22:48:42.221337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.826 [2024-07-15 22:48:42.221363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.826 qpair failed and we were unable to recover it. 00:24:58.826 [2024-07-15 22:48:42.221588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.826 [2024-07-15 22:48:42.221617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.826 qpair failed and we were unable to recover it. 00:24:58.826 [2024-07-15 22:48:42.221840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.826 [2024-07-15 22:48:42.221871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.826 qpair failed and we were unable to recover it. 00:24:58.826 [2024-07-15 22:48:42.222089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.826 [2024-07-15 22:48:42.222116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.826 qpair failed and we were unable to recover it. 00:24:58.826 [2024-07-15 22:48:42.222321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.826 [2024-07-15 22:48:42.222350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.826 qpair failed and we were unable to recover it. 
00:24:58.826 [2024-07-15 22:48:42.222571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.826 [2024-07-15 22:48:42.222601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.826 qpair failed and we were unable to recover it. 00:24:58.826 [2024-07-15 22:48:42.222797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.826 [2024-07-15 22:48:42.222824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.826 qpair failed and we were unable to recover it. 00:24:58.826 [2024-07-15 22:48:42.223054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.826 [2024-07-15 22:48:42.223085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.826 qpair failed and we were unable to recover it. 00:24:58.826 [2024-07-15 22:48:42.223276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.826 [2024-07-15 22:48:42.223306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.826 qpair failed and we were unable to recover it. 00:24:58.826 [2024-07-15 22:48:42.223538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.826 [2024-07-15 22:48:42.223564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.826 qpair failed and we were unable to recover it. 
00:24:58.826 [2024-07-15 22:48:42.223759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.826 [2024-07-15 22:48:42.223789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.826 qpair failed and we were unable to recover it. 00:24:58.826 [2024-07-15 22:48:42.224014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.826 [2024-07-15 22:48:42.224044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.826 qpair failed and we were unable to recover it. 00:24:58.826 [2024-07-15 22:48:42.224244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.826 [2024-07-15 22:48:42.224270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.826 qpair failed and we were unable to recover it. 00:24:58.826 [2024-07-15 22:48:42.224491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.826 [2024-07-15 22:48:42.224521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.826 qpair failed and we were unable to recover it. 00:24:58.826 [2024-07-15 22:48:42.224687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.826 [2024-07-15 22:48:42.224719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.826 qpair failed and we were unable to recover it. 
00:24:58.826 [2024-07-15 22:48:42.224917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.827 [2024-07-15 22:48:42.224944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.827 qpair failed and we were unable to recover it. 00:24:58.827 [2024-07-15 22:48:42.225152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.827 [2024-07-15 22:48:42.225182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.827 qpair failed and we were unable to recover it. 00:24:58.827 [2024-07-15 22:48:42.225378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.827 [2024-07-15 22:48:42.225408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.827 qpair failed and we were unable to recover it. 00:24:58.827 [2024-07-15 22:48:42.225574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.827 [2024-07-15 22:48:42.225600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.827 qpair failed and we were unable to recover it. 00:24:58.827 [2024-07-15 22:48:42.225819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.827 [2024-07-15 22:48:42.225849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.827 qpair failed and we were unable to recover it. 
00:24:58.827 [2024-07-15 22:48:42.226084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.827 [2024-07-15 22:48:42.226111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.827 qpair failed and we were unable to recover it. 00:24:58.827 [2024-07-15 22:48:42.226290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.827 [2024-07-15 22:48:42.226317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.827 qpair failed and we were unable to recover it. 00:24:58.827 [2024-07-15 22:48:42.226516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.827 [2024-07-15 22:48:42.226543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.827 qpair failed and we were unable to recover it. 00:24:58.827 [2024-07-15 22:48:42.226725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.827 [2024-07-15 22:48:42.226754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.827 qpair failed and we were unable to recover it. 00:24:58.827 [2024-07-15 22:48:42.226951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.827 [2024-07-15 22:48:42.226979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.827 qpair failed and we were unable to recover it. 
00:24:58.827 [2024-07-15 22:48:42.227134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.827 [2024-07-15 22:48:42.227161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.827 qpair failed and we were unable to recover it. 00:24:58.827 [2024-07-15 22:48:42.227314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.827 [2024-07-15 22:48:42.227341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.827 qpair failed and we were unable to recover it. 00:24:58.827 [2024-07-15 22:48:42.227512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.827 [2024-07-15 22:48:42.227538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.827 qpair failed and we were unable to recover it. 00:24:58.827 [2024-07-15 22:48:42.227760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.827 [2024-07-15 22:48:42.227790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.827 qpair failed and we were unable to recover it. 00:24:58.827 [2024-07-15 22:48:42.228015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.827 [2024-07-15 22:48:42.228045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.827 qpair failed and we were unable to recover it. 
00:24:58.827 [2024-07-15 22:48:42.228276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.827 [2024-07-15 22:48:42.228303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.827 qpair failed and we were unable to recover it. 00:24:58.827 [2024-07-15 22:48:42.228508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.827 [2024-07-15 22:48:42.228537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.827 qpair failed and we were unable to recover it. 00:24:58.827 [2024-07-15 22:48:42.228729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.827 [2024-07-15 22:48:42.228759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.827 qpair failed and we were unable to recover it. 00:24:58.827 [2024-07-15 22:48:42.228935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.827 [2024-07-15 22:48:42.228962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.827 qpair failed and we were unable to recover it. 00:24:58.827 [2024-07-15 22:48:42.229122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.827 [2024-07-15 22:48:42.229152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.827 qpair failed and we were unable to recover it. 
00:24:58.827 [2024-07-15 22:48:42.229345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.827 [2024-07-15 22:48:42.229374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.827 qpair failed and we were unable to recover it. 00:24:58.827 [2024-07-15 22:48:42.229573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.827 [2024-07-15 22:48:42.229601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.827 qpair failed and we were unable to recover it. 00:24:58.827 [2024-07-15 22:48:42.229794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.827 [2024-07-15 22:48:42.229824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.827 qpair failed and we were unable to recover it. 00:24:58.827 [2024-07-15 22:48:42.230027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.827 [2024-07-15 22:48:42.230054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.827 qpair failed and we were unable to recover it. 00:24:58.827 [2024-07-15 22:48:42.230207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.827 [2024-07-15 22:48:42.230233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.827 qpair failed and we were unable to recover it. 
00:24:58.827 [2024-07-15 22:48:42.230432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.827 [2024-07-15 22:48:42.230461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.827 qpair failed and we were unable to recover it. 00:24:58.827 [2024-07-15 22:48:42.230692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.827 [2024-07-15 22:48:42.230722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.827 qpair failed and we were unable to recover it. 00:24:58.827 [2024-07-15 22:48:42.230958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.827 [2024-07-15 22:48:42.230986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.827 qpair failed and we were unable to recover it. 00:24:58.827 [2024-07-15 22:48:42.231218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.827 [2024-07-15 22:48:42.231248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.827 qpair failed and we were unable to recover it. 00:24:58.827 [2024-07-15 22:48:42.231437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.827 [2024-07-15 22:48:42.231467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.827 qpair failed and we were unable to recover it. 
00:24:58.827 [2024-07-15 22:48:42.231693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.827 [2024-07-15 22:48:42.231720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.827 qpair failed and we were unable to recover it. 00:24:58.827 [2024-07-15 22:48:42.231910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.827 [2024-07-15 22:48:42.231940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.827 qpair failed and we were unable to recover it. 00:24:58.827 [2024-07-15 22:48:42.232132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.827 [2024-07-15 22:48:42.232162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.827 qpair failed and we were unable to recover it. 00:24:58.827 [2024-07-15 22:48:42.232359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.827 [2024-07-15 22:48:42.232386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.827 qpair failed and we were unable to recover it. 00:24:58.827 [2024-07-15 22:48:42.232563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.828 [2024-07-15 22:48:42.232590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.828 qpair failed and we were unable to recover it. 
00:24:58.828 [2024-07-15 22:48:42.232785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.828 [2024-07-15 22:48:42.232815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.828 qpair failed and we were unable to recover it. 00:24:58.828 [2024-07-15 22:48:42.233010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.828 [2024-07-15 22:48:42.233037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.828 qpair failed and we were unable to recover it. 00:24:58.828 [2024-07-15 22:48:42.233191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.828 [2024-07-15 22:48:42.233218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.828 qpair failed and we were unable to recover it. 00:24:58.828 [2024-07-15 22:48:42.233396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.828 [2024-07-15 22:48:42.233423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.828 qpair failed and we were unable to recover it. 00:24:58.828 [2024-07-15 22:48:42.233628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.828 [2024-07-15 22:48:42.233655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.828 qpair failed and we were unable to recover it. 
00:24:58.828 [2024-07-15 22:48:42.233823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.828 [2024-07-15 22:48:42.233853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:58.828 qpair failed and we were unable to recover it.
[... the same connect()/qpair-recovery error triplet repeats continuously from 22:48:42.233823 through 22:48:42.259306, always with errno = 111, tqpair=0xd3f200, addr=10.0.0.2, port=4420 ...]
00:24:58.831 [2024-07-15 22:48:42.259489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.831 [2024-07-15 22:48:42.259519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.831 qpair failed and we were unable to recover it. 00:24:58.831 [2024-07-15 22:48:42.259710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.831 [2024-07-15 22:48:42.259737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.831 qpair failed and we were unable to recover it. 00:24:58.831 [2024-07-15 22:48:42.259914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.831 [2024-07-15 22:48:42.259945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.831 qpair failed and we were unable to recover it. 00:24:58.831 [2024-07-15 22:48:42.260132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.831 [2024-07-15 22:48:42.260162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.831 qpair failed and we were unable to recover it. 00:24:58.831 [2024-07-15 22:48:42.260345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.831 [2024-07-15 22:48:42.260372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.831 qpair failed and we were unable to recover it. 
00:24:58.831 [2024-07-15 22:48:42.260570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.831 [2024-07-15 22:48:42.260600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.831 qpair failed and we were unable to recover it. 00:24:58.831 [2024-07-15 22:48:42.260761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.831 [2024-07-15 22:48:42.260791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.831 qpair failed and we were unable to recover it. 00:24:58.831 [2024-07-15 22:48:42.260957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.831 [2024-07-15 22:48:42.260984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.831 qpair failed and we were unable to recover it. 00:24:58.831 [2024-07-15 22:48:42.261180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.831 [2024-07-15 22:48:42.261210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.831 qpair failed and we were unable to recover it. 00:24:58.831 [2024-07-15 22:48:42.261435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.831 [2024-07-15 22:48:42.261464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.831 qpair failed and we were unable to recover it. 
00:24:58.831 [2024-07-15 22:48:42.261635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.831 [2024-07-15 22:48:42.261662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.831 qpair failed and we were unable to recover it. 00:24:58.831 [2024-07-15 22:48:42.261853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.831 [2024-07-15 22:48:42.261904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.831 qpair failed and we were unable to recover it. 00:24:58.831 [2024-07-15 22:48:42.262092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.831 [2024-07-15 22:48:42.262122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.831 qpair failed and we were unable to recover it. 00:24:58.831 [2024-07-15 22:48:42.262306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.831 [2024-07-15 22:48:42.262333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.831 qpair failed and we were unable to recover it. 00:24:58.831 [2024-07-15 22:48:42.262494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.831 [2024-07-15 22:48:42.262524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.831 qpair failed and we were unable to recover it. 
00:24:58.831 [2024-07-15 22:48:42.262685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.831 [2024-07-15 22:48:42.262714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.831 qpair failed and we were unable to recover it. 00:24:58.831 [2024-07-15 22:48:42.262915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.831 [2024-07-15 22:48:42.262944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.831 qpair failed and we were unable to recover it. 00:24:58.831 [2024-07-15 22:48:42.263120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.831 [2024-07-15 22:48:42.263151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.831 qpair failed and we were unable to recover it. 00:24:58.831 [2024-07-15 22:48:42.263378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.831 [2024-07-15 22:48:42.263408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.831 qpair failed and we were unable to recover it. 00:24:58.831 [2024-07-15 22:48:42.263607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.831 [2024-07-15 22:48:42.263634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.831 qpair failed and we were unable to recover it. 
00:24:58.831 [2024-07-15 22:48:42.263859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.831 [2024-07-15 22:48:42.263898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.831 qpair failed and we were unable to recover it. 00:24:58.831 [2024-07-15 22:48:42.264098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.831 [2024-07-15 22:48:42.264128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.831 qpair failed and we were unable to recover it. 00:24:58.831 [2024-07-15 22:48:42.264297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.264328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 00:24:58.832 [2024-07-15 22:48:42.264520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.264549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 00:24:58.832 [2024-07-15 22:48:42.264768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.264798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 
00:24:58.832 [2024-07-15 22:48:42.265014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.265053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 00:24:58.832 [2024-07-15 22:48:42.265250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.265291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 00:24:58.832 [2024-07-15 22:48:42.265456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.265485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 00:24:58.832 [2024-07-15 22:48:42.265658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.265684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 00:24:58.832 [2024-07-15 22:48:42.265837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.265866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 
00:24:58.832 [2024-07-15 22:48:42.266101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.266131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 00:24:58.832 [2024-07-15 22:48:42.266328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.266355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 00:24:58.832 [2024-07-15 22:48:42.266586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.266616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 00:24:58.832 [2024-07-15 22:48:42.266835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.266865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 00:24:58.832 [2024-07-15 22:48:42.267080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.267107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 
00:24:58.832 [2024-07-15 22:48:42.267261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.267287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 00:24:58.832 [2024-07-15 22:48:42.267502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.267533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 00:24:58.832 [2024-07-15 22:48:42.267698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.267725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 00:24:58.832 [2024-07-15 22:48:42.267923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.267956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 00:24:58.832 [2024-07-15 22:48:42.268119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.268147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 
00:24:58.832 [2024-07-15 22:48:42.268364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.268391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 00:24:58.832 [2024-07-15 22:48:42.268593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.268623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 00:24:58.832 [2024-07-15 22:48:42.268841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.268870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 00:24:58.832 [2024-07-15 22:48:42.269084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.269121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 00:24:58.832 [2024-07-15 22:48:42.269324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.269366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 
00:24:58.832 [2024-07-15 22:48:42.269528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.269557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 00:24:58.832 [2024-07-15 22:48:42.269725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.269752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 00:24:58.832 [2024-07-15 22:48:42.269936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.269964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 00:24:58.832 [2024-07-15 22:48:42.270135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.270165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 00:24:58.832 [2024-07-15 22:48:42.270364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.270390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 
00:24:58.832 [2024-07-15 22:48:42.270557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.270587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 00:24:58.832 [2024-07-15 22:48:42.270783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.270815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 00:24:58.832 [2024-07-15 22:48:42.270979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.271007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 00:24:58.832 [2024-07-15 22:48:42.271154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.271182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 00:24:58.832 [2024-07-15 22:48:42.271375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.271405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 
00:24:58.832 [2024-07-15 22:48:42.271600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.271627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 00:24:58.832 [2024-07-15 22:48:42.271790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.271820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 00:24:58.832 [2024-07-15 22:48:42.272020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.832 [2024-07-15 22:48:42.272047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:58.832 qpair failed and we were unable to recover it. 00:24:59.119 [2024-07-15 22:48:42.272223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.119 [2024-07-15 22:48:42.272252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.119 qpair failed and we were unable to recover it. 00:24:59.119 [2024-07-15 22:48:42.272409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.119 [2024-07-15 22:48:42.272440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.119 qpair failed and we were unable to recover it. 
00:24:59.119 [2024-07-15 22:48:42.272631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.119 [2024-07-15 22:48:42.272662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.119 qpair failed and we were unable to recover it. 00:24:59.119 [2024-07-15 22:48:42.272824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.119 [2024-07-15 22:48:42.272851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.119 qpair failed and we were unable to recover it. 00:24:59.119 [2024-07-15 22:48:42.273052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.119 [2024-07-15 22:48:42.273083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.119 qpair failed and we were unable to recover it. 00:24:59.119 [2024-07-15 22:48:42.273270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.119 [2024-07-15 22:48:42.273305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.119 qpair failed and we were unable to recover it. 00:24:59.120 [2024-07-15 22:48:42.273478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.120 [2024-07-15 22:48:42.273505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.120 qpair failed and we were unable to recover it. 
00:24:59.120 [2024-07-15 22:48:42.273705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.120 [2024-07-15 22:48:42.273734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.120 qpair failed and we were unable to recover it. 00:24:59.120 [2024-07-15 22:48:42.273934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.120 [2024-07-15 22:48:42.273965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.120 qpair failed and we were unable to recover it. 00:24:59.120 [2024-07-15 22:48:42.274159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.120 [2024-07-15 22:48:42.274186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.120 qpair failed and we were unable to recover it. 00:24:59.120 [2024-07-15 22:48:42.274409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.120 [2024-07-15 22:48:42.274438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.120 qpair failed and we were unable to recover it. 00:24:59.120 [2024-07-15 22:48:42.274593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.120 [2024-07-15 22:48:42.274623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.120 qpair failed and we were unable to recover it. 
00:24:59.120 [2024-07-15 22:48:42.274812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.120 [2024-07-15 22:48:42.274839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.120 qpair failed and we were unable to recover it. 00:24:59.120 [2024-07-15 22:48:42.275009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.120 [2024-07-15 22:48:42.275039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.120 qpair failed and we were unable to recover it. 00:24:59.120 [2024-07-15 22:48:42.275199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.120 [2024-07-15 22:48:42.275229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.120 qpair failed and we were unable to recover it. 00:24:59.120 [2024-07-15 22:48:42.275421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.120 [2024-07-15 22:48:42.275449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.120 qpair failed and we were unable to recover it. 00:24:59.120 [2024-07-15 22:48:42.275651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.120 [2024-07-15 22:48:42.275681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.120 qpair failed and we were unable to recover it. 
00:24:59.120 [2024-07-15 22:48:42.275843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.120 [2024-07-15 22:48:42.275874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.120 qpair failed and we were unable to recover it. 
00:24:59.123 [2024-07-15 22:48:42.301613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.123 [2024-07-15 22:48:42.301640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.123 qpair failed and we were unable to recover it. 00:24:59.123 [2024-07-15 22:48:42.301839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.123 [2024-07-15 22:48:42.301870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.123 qpair failed and we were unable to recover it. 00:24:59.123 [2024-07-15 22:48:42.302115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.123 [2024-07-15 22:48:42.302145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.123 qpair failed and we were unable to recover it. 00:24:59.123 [2024-07-15 22:48:42.302347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.123 [2024-07-15 22:48:42.302375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.123 qpair failed and we were unable to recover it. 00:24:59.123 [2024-07-15 22:48:42.302602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.123 [2024-07-15 22:48:42.302633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.123 qpair failed and we were unable to recover it. 
00:24:59.123 [2024-07-15 22:48:42.302802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.123 [2024-07-15 22:48:42.302832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.123 qpair failed and we were unable to recover it. 00:24:59.123 [2024-07-15 22:48:42.303040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.123 [2024-07-15 22:48:42.303068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.123 qpair failed and we were unable to recover it. 00:24:59.123 [2024-07-15 22:48:42.303267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.123 [2024-07-15 22:48:42.303296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.123 qpair failed and we were unable to recover it. 00:24:59.123 [2024-07-15 22:48:42.303491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.123 [2024-07-15 22:48:42.303519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.123 qpair failed and we were unable to recover it. 00:24:59.123 [2024-07-15 22:48:42.303690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.123 [2024-07-15 22:48:42.303721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.123 qpair failed and we were unable to recover it. 
00:24:59.123 [2024-07-15 22:48:42.303928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.123 [2024-07-15 22:48:42.303958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.123 qpair failed and we were unable to recover it. 00:24:59.123 [2024-07-15 22:48:42.304126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.123 [2024-07-15 22:48:42.304156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.123 qpair failed and we were unable to recover it. 00:24:59.123 [2024-07-15 22:48:42.304352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.123 [2024-07-15 22:48:42.304379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.123 qpair failed and we were unable to recover it. 00:24:59.123 [2024-07-15 22:48:42.304601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.123 [2024-07-15 22:48:42.304631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.123 qpair failed and we were unable to recover it. 00:24:59.123 [2024-07-15 22:48:42.304855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.123 [2024-07-15 22:48:42.304895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.123 qpair failed and we were unable to recover it. 
00:24:59.123 [2024-07-15 22:48:42.305068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.123 [2024-07-15 22:48:42.305095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.123 qpair failed and we were unable to recover it. 00:24:59.123 [2024-07-15 22:48:42.305296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.123 [2024-07-15 22:48:42.305326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.123 qpair failed and we were unable to recover it. 00:24:59.123 [2024-07-15 22:48:42.305513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.123 [2024-07-15 22:48:42.305542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.123 qpair failed and we were unable to recover it. 00:24:59.123 [2024-07-15 22:48:42.305710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.123 [2024-07-15 22:48:42.305737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.123 qpair failed and we were unable to recover it. 00:24:59.123 [2024-07-15 22:48:42.305934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.123 [2024-07-15 22:48:42.305965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.123 qpair failed and we were unable to recover it. 
00:24:59.123 [2024-07-15 22:48:42.306156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.123 [2024-07-15 22:48:42.306186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.123 qpair failed and we were unable to recover it. 00:24:59.123 [2024-07-15 22:48:42.306409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.123 [2024-07-15 22:48:42.306437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.123 qpair failed and we were unable to recover it. 00:24:59.123 [2024-07-15 22:48:42.306666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.123 [2024-07-15 22:48:42.306696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.123 qpair failed and we were unable to recover it. 00:24:59.123 [2024-07-15 22:48:42.306929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.123 [2024-07-15 22:48:42.306959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.123 qpair failed and we were unable to recover it. 00:24:59.123 [2024-07-15 22:48:42.307181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.123 [2024-07-15 22:48:42.307208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.123 qpair failed and we were unable to recover it. 
00:24:59.123 [2024-07-15 22:48:42.307432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.123 [2024-07-15 22:48:42.307463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.123 qpair failed and we were unable to recover it. 00:24:59.123 [2024-07-15 22:48:42.307682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.123 [2024-07-15 22:48:42.307712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.123 qpair failed and we were unable to recover it. 00:24:59.123 [2024-07-15 22:48:42.307889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.123 [2024-07-15 22:48:42.307929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.123 qpair failed and we were unable to recover it. 00:24:59.123 [2024-07-15 22:48:42.308132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.123 [2024-07-15 22:48:42.308162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.123 qpair failed and we were unable to recover it. 00:24:59.123 [2024-07-15 22:48:42.308343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.123 [2024-07-15 22:48:42.308373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 
00:24:59.124 [2024-07-15 22:48:42.308570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.124 [2024-07-15 22:48:42.308597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 00:24:59.124 [2024-07-15 22:48:42.308782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.124 [2024-07-15 22:48:42.308812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 00:24:59.124 [2024-07-15 22:48:42.308994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.124 [2024-07-15 22:48:42.309021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 00:24:59.124 [2024-07-15 22:48:42.309199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.124 [2024-07-15 22:48:42.309226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 00:24:59.124 [2024-07-15 22:48:42.309412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.124 [2024-07-15 22:48:42.309442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 
00:24:59.124 [2024-07-15 22:48:42.309622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.124 [2024-07-15 22:48:42.309652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 00:24:59.124 [2024-07-15 22:48:42.309856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.124 [2024-07-15 22:48:42.309901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 00:24:59.124 [2024-07-15 22:48:42.310129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.124 [2024-07-15 22:48:42.310159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 00:24:59.124 [2024-07-15 22:48:42.310351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.124 [2024-07-15 22:48:42.310382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 00:24:59.124 [2024-07-15 22:48:42.310576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.124 [2024-07-15 22:48:42.310603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 
00:24:59.124 [2024-07-15 22:48:42.310803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.124 [2024-07-15 22:48:42.310833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 00:24:59.124 [2024-07-15 22:48:42.311040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.124 [2024-07-15 22:48:42.311079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 00:24:59.124 [2024-07-15 22:48:42.311226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.124 [2024-07-15 22:48:42.311251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 00:24:59.124 [2024-07-15 22:48:42.311447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.124 [2024-07-15 22:48:42.311477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 00:24:59.124 [2024-07-15 22:48:42.311641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.124 [2024-07-15 22:48:42.311670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 
00:24:59.124 [2024-07-15 22:48:42.311843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.124 [2024-07-15 22:48:42.311870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 00:24:59.124 [2024-07-15 22:48:42.312052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.124 [2024-07-15 22:48:42.312082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 00:24:59.124 [2024-07-15 22:48:42.312269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.124 [2024-07-15 22:48:42.312299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 00:24:59.124 [2024-07-15 22:48:42.312464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.124 [2024-07-15 22:48:42.312491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 00:24:59.124 [2024-07-15 22:48:42.312641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.124 [2024-07-15 22:48:42.312669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 
00:24:59.124 [2024-07-15 22:48:42.312869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.124 [2024-07-15 22:48:42.312924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 00:24:59.124 [2024-07-15 22:48:42.313122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.124 [2024-07-15 22:48:42.313149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 00:24:59.124 [2024-07-15 22:48:42.313312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.124 [2024-07-15 22:48:42.313342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 00:24:59.124 [2024-07-15 22:48:42.313557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.124 [2024-07-15 22:48:42.313587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 00:24:59.124 [2024-07-15 22:48:42.313780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.124 [2024-07-15 22:48:42.313807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 
00:24:59.124 [2024-07-15 22:48:42.314035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.124 [2024-07-15 22:48:42.314065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 00:24:59.124 [2024-07-15 22:48:42.314289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.124 [2024-07-15 22:48:42.314319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 00:24:59.124 [2024-07-15 22:48:42.314519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.124 [2024-07-15 22:48:42.314546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 00:24:59.124 [2024-07-15 22:48:42.314713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.124 [2024-07-15 22:48:42.314744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 00:24:59.124 [2024-07-15 22:48:42.314932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.124 [2024-07-15 22:48:42.314963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 
00:24:59.124 [2024-07-15 22:48:42.315163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.124 [2024-07-15 22:48:42.315191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 00:24:59.124 [2024-07-15 22:48:42.315346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.124 [2024-07-15 22:48:42.315373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 00:24:59.124 [2024-07-15 22:48:42.315545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.124 [2024-07-15 22:48:42.315572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 00:24:59.124 [2024-07-15 22:48:42.315753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.124 [2024-07-15 22:48:42.315779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.124 qpair failed and we were unable to recover it. 00:24:59.124 [2024-07-15 22:48:42.316002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.125 [2024-07-15 22:48:42.316032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.125 qpair failed and we were unable to recover it. 
00:24:59.125 [2024-07-15 22:48:42.316202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.125 [2024-07-15 22:48:42.316233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.125 qpair failed and we were unable to recover it. 00:24:59.125 [2024-07-15 22:48:42.316429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.125 [2024-07-15 22:48:42.316456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.125 qpair failed and we were unable to recover it. 00:24:59.125 [2024-07-15 22:48:42.316660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.125 [2024-07-15 22:48:42.316690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.125 qpair failed and we were unable to recover it. 00:24:59.125 [2024-07-15 22:48:42.316871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.125 [2024-07-15 22:48:42.316924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.125 qpair failed and we were unable to recover it. 00:24:59.125 [2024-07-15 22:48:42.317119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.125 [2024-07-15 22:48:42.317146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.125 qpair failed and we were unable to recover it. 
00:24:59.125 [2024-07-15 22:48:42.317317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.125 [2024-07-15 22:48:42.317347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.125 qpair failed and we were unable to recover it. 00:24:59.125 [2024-07-15 22:48:42.317507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.125 [2024-07-15 22:48:42.317537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.125 qpair failed and we were unable to recover it. 00:24:59.125 [2024-07-15 22:48:42.317738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.125 [2024-07-15 22:48:42.317765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.125 qpair failed and we were unable to recover it. 00:24:59.125 [2024-07-15 22:48:42.317938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.125 [2024-07-15 22:48:42.317968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.125 qpair failed and we were unable to recover it. 00:24:59.125 [2024-07-15 22:48:42.318159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.125 [2024-07-15 22:48:42.318200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.125 qpair failed and we were unable to recover it. 
00:24:59.125 [2024-07-15 22:48:42.318426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.125 [2024-07-15 22:48:42.318453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.125 qpair failed and we were unable to recover it.
00:24:59.128 [the three-line sequence above repeats identically for every retry of tqpair=0xd3f200 through 2024-07-15 22:48:42.344139]
00:24:59.128 [2024-07-15 22:48:42.344289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.128 [2024-07-15 22:48:42.344316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.128 qpair failed and we were unable to recover it. 00:24:59.128 [2024-07-15 22:48:42.344513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.128 [2024-07-15 22:48:42.344542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.128 qpair failed and we were unable to recover it. 00:24:59.128 [2024-07-15 22:48:42.344745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.128 [2024-07-15 22:48:42.344772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.128 qpair failed and we were unable to recover it. 00:24:59.128 [2024-07-15 22:48:42.344930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.128 [2024-07-15 22:48:42.344960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.128 qpair failed and we were unable to recover it. 00:24:59.128 [2024-07-15 22:48:42.345157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.128 [2024-07-15 22:48:42.345187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.128 qpair failed and we were unable to recover it. 
00:24:59.128 [2024-07-15 22:48:42.345359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.128 [2024-07-15 22:48:42.345388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.128 qpair failed and we were unable to recover it. 00:24:59.128 [2024-07-15 22:48:42.345606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.128 [2024-07-15 22:48:42.345636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.128 qpair failed and we were unable to recover it. 00:24:59.128 [2024-07-15 22:48:42.345864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.128 [2024-07-15 22:48:42.345901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.128 qpair failed and we were unable to recover it. 00:24:59.128 [2024-07-15 22:48:42.346107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.128 [2024-07-15 22:48:42.346134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.128 qpair failed and we were unable to recover it. 00:24:59.128 [2024-07-15 22:48:42.346328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.128 [2024-07-15 22:48:42.346358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.128 qpair failed and we were unable to recover it. 
00:24:59.128 [2024-07-15 22:48:42.346558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.128 [2024-07-15 22:48:42.346587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.128 qpair failed and we were unable to recover it. 00:24:59.128 [2024-07-15 22:48:42.346781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.128 [2024-07-15 22:48:42.346808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.128 qpair failed and we were unable to recover it. 00:24:59.128 [2024-07-15 22:48:42.346998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.128 [2024-07-15 22:48:42.347029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.128 qpair failed and we were unable to recover it. 00:24:59.128 [2024-07-15 22:48:42.347218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.128 [2024-07-15 22:48:42.347247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.128 qpair failed and we were unable to recover it. 00:24:59.128 [2024-07-15 22:48:42.347426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.128 [2024-07-15 22:48:42.347453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.128 qpair failed and we were unable to recover it. 
00:24:59.128 [2024-07-15 22:48:42.347625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.128 [2024-07-15 22:48:42.347652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.128 qpair failed and we were unable to recover it. 00:24:59.128 [2024-07-15 22:48:42.347825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.128 [2024-07-15 22:48:42.347852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.128 qpair failed and we were unable to recover it. 00:24:59.128 [2024-07-15 22:48:42.348043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.128 [2024-07-15 22:48:42.348072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.128 qpair failed and we were unable to recover it. 00:24:59.128 [2024-07-15 22:48:42.348233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.128 [2024-07-15 22:48:42.348263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.128 qpair failed and we were unable to recover it. 00:24:59.128 [2024-07-15 22:48:42.348457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.128 [2024-07-15 22:48:42.348487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.128 qpair failed and we were unable to recover it. 
00:24:59.128 [2024-07-15 22:48:42.348693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.128 [2024-07-15 22:48:42.348720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.128 qpair failed and we were unable to recover it. 00:24:59.128 [2024-07-15 22:48:42.348943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.128 [2024-07-15 22:48:42.348973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.128 qpair failed and we were unable to recover it. 00:24:59.128 [2024-07-15 22:48:42.349170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.128 [2024-07-15 22:48:42.349199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.128 qpair failed and we were unable to recover it. 00:24:59.128 [2024-07-15 22:48:42.349426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.128 [2024-07-15 22:48:42.349457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.128 qpair failed and we were unable to recover it. 00:24:59.128 [2024-07-15 22:48:42.349660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.128 [2024-07-15 22:48:42.349689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.128 qpair failed and we were unable to recover it. 
00:24:59.128 [2024-07-15 22:48:42.349853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.128 [2024-07-15 22:48:42.349890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.128 qpair failed and we were unable to recover it. 00:24:59.129 [2024-07-15 22:48:42.350085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.350112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 00:24:59.129 [2024-07-15 22:48:42.350299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.350329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 00:24:59.129 [2024-07-15 22:48:42.350522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.350552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 00:24:59.129 [2024-07-15 22:48:42.350782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.350809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 
00:24:59.129 [2024-07-15 22:48:42.351013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.351044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 00:24:59.129 [2024-07-15 22:48:42.351241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.351268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 00:24:59.129 [2024-07-15 22:48:42.351445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.351472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 00:24:59.129 [2024-07-15 22:48:42.351694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.351724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 00:24:59.129 [2024-07-15 22:48:42.351949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.351979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 
00:24:59.129 [2024-07-15 22:48:42.352159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.352186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 00:24:59.129 [2024-07-15 22:48:42.352365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.352392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 00:24:59.129 [2024-07-15 22:48:42.352588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.352619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 00:24:59.129 [2024-07-15 22:48:42.352837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.352864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 00:24:59.129 [2024-07-15 22:48:42.353098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.353128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 
00:24:59.129 [2024-07-15 22:48:42.353304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.353334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 00:24:59.129 [2024-07-15 22:48:42.353554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.353581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 00:24:59.129 [2024-07-15 22:48:42.353780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.353810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 00:24:59.129 [2024-07-15 22:48:42.354028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.354056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 00:24:59.129 [2024-07-15 22:48:42.354257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.354284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 
00:24:59.129 [2024-07-15 22:48:42.354508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.354538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 00:24:59.129 [2024-07-15 22:48:42.354702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.354728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 00:24:59.129 [2024-07-15 22:48:42.354888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.354916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 00:24:59.129 [2024-07-15 22:48:42.355119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.355149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 00:24:59.129 [2024-07-15 22:48:42.355330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.355359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 
00:24:59.129 [2024-07-15 22:48:42.355582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.355610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 00:24:59.129 [2024-07-15 22:48:42.355817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.355847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 00:24:59.129 [2024-07-15 22:48:42.356089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.356117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 00:24:59.129 [2024-07-15 22:48:42.356293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.356320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 00:24:59.129 [2024-07-15 22:48:42.356514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.356544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 
00:24:59.129 [2024-07-15 22:48:42.356700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.356729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 00:24:59.129 [2024-07-15 22:48:42.356907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.356935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 00:24:59.129 [2024-07-15 22:48:42.357139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.357182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 00:24:59.129 [2024-07-15 22:48:42.357385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.357415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 00:24:59.129 [2024-07-15 22:48:42.357641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.357668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 
00:24:59.129 [2024-07-15 22:48:42.357866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.357903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 00:24:59.129 [2024-07-15 22:48:42.358121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.358151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 00:24:59.129 [2024-07-15 22:48:42.358352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.358379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 00:24:59.129 [2024-07-15 22:48:42.358580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.358610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 00:24:59.129 [2024-07-15 22:48:42.358823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.358858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 
00:24:59.129 [2024-07-15 22:48:42.359070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.359097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 00:24:59.129 [2024-07-15 22:48:42.359299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.359329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.129 qpair failed and we were unable to recover it. 00:24:59.129 [2024-07-15 22:48:42.359548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.129 [2024-07-15 22:48:42.359577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.130 qpair failed and we were unable to recover it. 00:24:59.130 [2024-07-15 22:48:42.359748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.130 [2024-07-15 22:48:42.359775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.130 qpair failed and we were unable to recover it. 00:24:59.130 [2024-07-15 22:48:42.359959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.130 [2024-07-15 22:48:42.359990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.130 qpair failed and we were unable to recover it. 
00:24:59.130 [2024-07-15 22:48:42.360214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.130 [2024-07-15 22:48:42.360244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.130 qpair failed and we were unable to recover it. 00:24:59.130 [2024-07-15 22:48:42.360468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.130 [2024-07-15 22:48:42.360494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.130 qpair failed and we were unable to recover it. 00:24:59.130 [2024-07-15 22:48:42.360665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.130 [2024-07-15 22:48:42.360705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.130 qpair failed and we were unable to recover it. 00:24:59.130 [2024-07-15 22:48:42.360921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.130 [2024-07-15 22:48:42.360952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.130 qpair failed and we were unable to recover it. 00:24:59.130 [2024-07-15 22:48:42.361132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.130 [2024-07-15 22:48:42.361159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.130 qpair failed and we were unable to recover it. 
00:24:59.130 [2024-07-15 22:48:42.361330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.130 [2024-07-15 22:48:42.361357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.130 qpair failed and we were unable to recover it. 00:24:59.130 [2024-07-15 22:48:42.361556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.130 [2024-07-15 22:48:42.361585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.130 qpair failed and we were unable to recover it. 00:24:59.130 [2024-07-15 22:48:42.361788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.130 [2024-07-15 22:48:42.361815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.130 qpair failed and we were unable to recover it. 00:24:59.130 [2024-07-15 22:48:42.362018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.130 [2024-07-15 22:48:42.362048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.130 qpair failed and we were unable to recover it. 00:24:59.130 [2024-07-15 22:48:42.362266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.130 [2024-07-15 22:48:42.362296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.130 qpair failed and we were unable to recover it. 
00:24:59.132 [2024-07-15 22:48:42.386978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.132 [2024-07-15 22:48:42.387009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.132 qpair failed and we were unable to recover it. 00:24:59.132 [2024-07-15 22:48:42.387235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.132 [2024-07-15 22:48:42.387262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.132 qpair failed and we were unable to recover it. 00:24:59.132 [2024-07-15 22:48:42.387438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.387468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 00:24:59.133 [2024-07-15 22:48:42.387696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.387726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 00:24:59.133 [2024-07-15 22:48:42.387929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.387958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 
00:24:59.133 [2024-07-15 22:48:42.388145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.388176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 00:24:59.133 [2024-07-15 22:48:42.388349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.388379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 00:24:59.133 [2024-07-15 22:48:42.388550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.388577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 00:24:59.133 [2024-07-15 22:48:42.388769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.388798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 00:24:59.133 [2024-07-15 22:48:42.389015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.389046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 
00:24:59.133 [2024-07-15 22:48:42.389216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.389246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 00:24:59.133 [2024-07-15 22:48:42.389470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.389500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 00:24:59.133 [2024-07-15 22:48:42.389720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.389750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 00:24:59.133 [2024-07-15 22:48:42.389945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.389972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 00:24:59.133 [2024-07-15 22:48:42.390175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.390205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 
00:24:59.133 [2024-07-15 22:48:42.390363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.390393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 00:24:59.133 [2024-07-15 22:48:42.390581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.390608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 00:24:59.133 [2024-07-15 22:48:42.390808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.390838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 00:24:59.133 [2024-07-15 22:48:42.391040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.391068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 00:24:59.133 [2024-07-15 22:48:42.391218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.391244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 
00:24:59.133 [2024-07-15 22:48:42.391440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.391470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 00:24:59.133 [2024-07-15 22:48:42.391664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.391694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 00:24:59.133 [2024-07-15 22:48:42.391863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.391903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 00:24:59.133 [2024-07-15 22:48:42.392099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.392129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 00:24:59.133 [2024-07-15 22:48:42.392326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.392356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 
00:24:59.133 [2024-07-15 22:48:42.392545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.392572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 00:24:59.133 [2024-07-15 22:48:42.392793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.392822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 00:24:59.133 [2024-07-15 22:48:42.393056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.393084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 00:24:59.133 [2024-07-15 22:48:42.393258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.393285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 00:24:59.133 [2024-07-15 22:48:42.393487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.393517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 
00:24:59.133 [2024-07-15 22:48:42.393704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.393734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 00:24:59.133 [2024-07-15 22:48:42.393927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.393954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 00:24:59.133 [2024-07-15 22:48:42.394132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.394174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 00:24:59.133 [2024-07-15 22:48:42.394331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.394360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 00:24:59.133 [2024-07-15 22:48:42.394558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.394586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 
00:24:59.133 [2024-07-15 22:48:42.394809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.394839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 00:24:59.133 [2024-07-15 22:48:42.395018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.395045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 00:24:59.133 [2024-07-15 22:48:42.395224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.395252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.133 qpair failed and we were unable to recover it. 00:24:59.133 [2024-07-15 22:48:42.395482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.133 [2024-07-15 22:48:42.395512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.134 qpair failed and we were unable to recover it. 00:24:59.134 [2024-07-15 22:48:42.395710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.134 [2024-07-15 22:48:42.395740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.134 qpair failed and we were unable to recover it. 
00:24:59.134 [2024-07-15 22:48:42.395960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.134 [2024-07-15 22:48:42.395988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.134 qpair failed and we were unable to recover it. 00:24:59.134 [2024-07-15 22:48:42.396247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.134 [2024-07-15 22:48:42.396276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.134 qpair failed and we were unable to recover it. 00:24:59.134 [2024-07-15 22:48:42.396445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.134 [2024-07-15 22:48:42.396474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.134 qpair failed and we were unable to recover it. 00:24:59.134 [2024-07-15 22:48:42.396634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.134 [2024-07-15 22:48:42.396672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.134 qpair failed and we were unable to recover it. 00:24:59.134 [2024-07-15 22:48:42.396891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.134 [2024-07-15 22:48:42.396921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.134 qpair failed and we were unable to recover it. 
00:24:59.134 [2024-07-15 22:48:42.397120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.134 [2024-07-15 22:48:42.397149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.134 qpair failed and we were unable to recover it. 00:24:59.134 [2024-07-15 22:48:42.397382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.134 [2024-07-15 22:48:42.397409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.134 qpair failed and we were unable to recover it. 00:24:59.134 [2024-07-15 22:48:42.397621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.134 [2024-07-15 22:48:42.397650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.134 qpair failed and we were unable to recover it. 00:24:59.134 [2024-07-15 22:48:42.397840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.134 [2024-07-15 22:48:42.397870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.134 qpair failed and we were unable to recover it. 00:24:59.134 [2024-07-15 22:48:42.398081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.134 [2024-07-15 22:48:42.398108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.134 qpair failed and we were unable to recover it. 
00:24:59.134 [2024-07-15 22:48:42.398289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.134 [2024-07-15 22:48:42.398316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.134 qpair failed and we were unable to recover it. 00:24:59.134 [2024-07-15 22:48:42.398497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.134 [2024-07-15 22:48:42.398531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.134 qpair failed and we were unable to recover it. 00:24:59.134 [2024-07-15 22:48:42.398760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.134 [2024-07-15 22:48:42.398787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.134 qpair failed and we were unable to recover it. 00:24:59.134 [2024-07-15 22:48:42.398992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.134 [2024-07-15 22:48:42.399023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.134 qpair failed and we were unable to recover it. 00:24:59.134 [2024-07-15 22:48:42.399192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.134 [2024-07-15 22:48:42.399222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.134 qpair failed and we were unable to recover it. 
00:24:59.134 [2024-07-15 22:48:42.399418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.134 [2024-07-15 22:48:42.399445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.134 qpair failed and we were unable to recover it. 00:24:59.134 [2024-07-15 22:48:42.399602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.134 [2024-07-15 22:48:42.399629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.134 qpair failed and we were unable to recover it. 00:24:59.134 [2024-07-15 22:48:42.399821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.134 [2024-07-15 22:48:42.399851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.134 qpair failed and we were unable to recover it. 00:24:59.134 [2024-07-15 22:48:42.400046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.134 [2024-07-15 22:48:42.400073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.134 qpair failed and we were unable to recover it. 00:24:59.134 [2024-07-15 22:48:42.400298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.134 [2024-07-15 22:48:42.400328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.134 qpair failed and we were unable to recover it. 
00:24:59.134 [2024-07-15 22:48:42.400520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.134 [2024-07-15 22:48:42.400549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.134 qpair failed and we were unable to recover it.
00:24:59.134 [2024-07-15 22:48:42.400745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.134 [2024-07-15 22:48:42.400772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.134 qpair failed and we were unable to recover it.
00:24:59.134 [2024-07-15 22:48:42.400968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.134 [2024-07-15 22:48:42.400999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.134 qpair failed and we were unable to recover it.
00:24:59.134 [2024-07-15 22:48:42.401192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.134 [2024-07-15 22:48:42.401222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.134 qpair failed and we were unable to recover it.
00:24:59.134 [2024-07-15 22:48:42.401413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.134 [2024-07-15 22:48:42.401440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1364275 Killed "${NVMF_APP[@]}" "$@"
00:24:59.134 qpair failed and we were unable to recover it.
00:24:59.134 [2024-07-15 22:48:42.401671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.134 [2024-07-15 22:48:42.401701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.134 qpair failed and we were unable to recover it.
00:24:59.134 [2024-07-15 22:48:42.401873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.134 [2024-07-15 22:48:42.401911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.134 qpair failed and we were unable to recover it.
00:24:59.134 22:48:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:24:59.134 [2024-07-15 22:48:42.402112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.134 [2024-07-15 22:48:42.402139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.134 qpair failed and we were unable to recover it.
00:24:59.134 [2024-07-15 22:48:42.402291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.134 [2024-07-15 22:48:42.402319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.134 qpair failed and we were unable to recover it.
00:24:59.134 22:48:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:24:59.134 [2024-07-15 22:48:42.402493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.134 [2024-07-15 22:48:42.402521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.134 qpair failed and we were unable to recover it.
00:24:59.134 22:48:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:24:59.134 [2024-07-15 22:48:42.402694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.134 [2024-07-15 22:48:42.402722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.134 qpair failed and we were unable to recover it.
00:24:59.134 [2024-07-15 22:48:42.402919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.134 22:48:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@716 -- # xtrace_disable
00:24:59.134 [2024-07-15 22:48:42.402949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.134 qpair failed and we were unable to recover it.
00:24:59.134 22:48:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:59.134 [2024-07-15 22:48:42.403146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.134 [2024-07-15 22:48:42.403177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.134 qpair failed and we were unable to recover it.
00:24:59.134 [2024-07-15 22:48:42.403331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.134 [2024-07-15 22:48:42.403358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.134 qpair failed and we were unable to recover it.
00:24:59.134 [2024-07-15 22:48:42.403536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.134 [2024-07-15 22:48:42.403565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.134 qpair failed and we were unable to recover it.
00:24:59.134 [2024-07-15 22:48:42.403728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.134 [2024-07-15 22:48:42.403758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.134 qpair failed and we were unable to recover it.
00:24:59.134 [2024-07-15 22:48:42.403935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.134 [2024-07-15 22:48:42.403962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.134 qpair failed and we were unable to recover it.
00:24:59.134 [2024-07-15 22:48:42.404187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.134 [2024-07-15 22:48:42.404216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.134 qpair failed and we were unable to recover it.
00:24:59.135 [2024-07-15 22:48:42.404393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.404423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 [2024-07-15 22:48:42.404779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.404810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 [2024-07-15 22:48:42.405035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.405063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 [2024-07-15 22:48:42.405232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.405262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 [2024-07-15 22:48:42.405485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.405511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 [2024-07-15 22:48:42.405676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.405705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 [2024-07-15 22:48:42.405901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.405942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 [2024-07-15 22:48:42.406164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.406191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 [2024-07-15 22:48:42.406393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.406423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 [2024-07-15 22:48:42.406603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.406633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 [2024-07-15 22:48:42.406809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.406836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 [2024-07-15 22:48:42.407003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.407034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 [2024-07-15 22:48:42.407242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.407272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 [2024-07-15 22:48:42.407441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.407470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 [2024-07-15 22:48:42.407790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.407842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 [2024-07-15 22:48:42.408079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.408107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 22:48:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1364828
00:24:59.135 22:48:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:24:59.135 [2024-07-15 22:48:42.408273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.408300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 22:48:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1364828
00:24:59.135 [2024-07-15 22:48:42.408475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.408502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 22:48:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@823 -- # '[' -z 1364828 ']'
00:24:59.135 [2024-07-15 22:48:42.408696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.408727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 22:48:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 22:48:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@828 -- # local max_retries=100
00:24:59.135 [2024-07-15 22:48:42.408929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.408957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 22:48:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:59.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... [2024-07-15 22:48:42.409161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.409191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 22:48:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # xtrace_disable
00:24:59.135 [2024-07-15 22:48:42.409361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 22:48:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:59.135 [2024-07-15 22:48:42.409392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 [2024-07-15 22:48:42.409585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.409611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 [2024-07-15 22:48:42.410062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.410096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 [2024-07-15 22:48:42.410298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.410328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 [2024-07-15 22:48:42.410532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.410560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 [2024-07-15 22:48:42.413893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.413933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 [2024-07-15 22:48:42.414197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.414238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 [2024-07-15 22:48:42.414496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.414535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 [2024-07-15 22:48:42.414773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.414814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 [2024-07-15 22:48:42.415073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.415115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 [2024-07-15 22:48:42.415343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.415380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 [2024-07-15 22:48:42.415594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.415635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 [2024-07-15 22:48:42.415838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.415891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 [2024-07-15 22:48:42.416128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.416165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 [2024-07-15 22:48:42.416372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.416426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.135 qpair failed and we were unable to recover it.
00:24:59.135 [2024-07-15 22:48:42.416622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.135 [2024-07-15 22:48:42.416665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.416913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.416951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.417162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.417204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.417450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.417492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.417725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.417761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.417946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.417987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.418185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.418224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.418474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.418511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.418720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.418762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.418987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.419026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.419220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.419258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.419484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.419526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.419783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.419823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.420076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.420115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.420350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.420387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.420607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.420645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.420865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.420914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.421120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.421162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.421416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.421456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.421678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.421713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.421944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.421987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.422187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.422228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.422431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.422467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.422662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.422701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.422930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.422966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.423175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.423217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.423440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.423480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.423701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.423742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.423987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.424024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.424221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.424260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.424471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.424510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.424713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.424751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.424998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.425037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.425226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.425266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.425493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.425528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.425731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.425785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.426004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.426042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.426254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.426289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.426505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.426542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.426758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.426798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.427024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.427061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.427271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.136 [2024-07-15 22:48:42.427324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.136 qpair failed and we were unable to recover it.
00:24:59.136 [2024-07-15 22:48:42.427537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.137 [2024-07-15 22:48:42.427577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.137 qpair failed and we were unable to recover it.
00:24:59.137 [2024-07-15 22:48:42.427823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.137 [2024-07-15 22:48:42.427860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.137 qpair failed and we were unable to recover it.
00:24:59.137 [2024-07-15 22:48:42.428134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.137 [2024-07-15 22:48:42.428171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.137 qpair failed and we were unable to recover it.
00:24:59.137 [2024-07-15 22:48:42.428374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.137 [2024-07-15 22:48:42.428425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.137 qpair failed and we were unable to recover it.
00:24:59.137 [2024-07-15 22:48:42.428644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.137 [2024-07-15 22:48:42.428681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.137 qpair failed and we were unable to recover it.
00:24:59.137 [2024-07-15 22:48:42.428987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.137 [2024-07-15 22:48:42.429026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.137 qpair failed and we were unable to recover it.
00:24:59.137 [2024-07-15 22:48:42.429247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.137 [2024-07-15 22:48:42.429282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.137 qpair failed and we were unable to recover it.
00:24:59.137 [2024-07-15 22:48:42.429512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.137 [2024-07-15 22:48:42.429551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.137 qpair failed and we were unable to recover it.
00:24:59.137 [2024-07-15 22:48:42.429745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.137 [2024-07-15 22:48:42.429784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.137 qpair failed and we were unable to recover it.
00:24:59.137 [2024-07-15 22:48:42.430015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.137 [2024-07-15 22:48:42.430056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.137 qpair failed and we were unable to recover it.
00:24:59.137 [2024-07-15 22:48:42.430301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.137 [2024-07-15 22:48:42.430340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.137 qpair failed and we were unable to recover it.
00:24:59.137 [2024-07-15 22:48:42.430543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.137 [2024-07-15 22:48:42.430580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.137 qpair failed and we were unable to recover it.
00:24:59.137 [2024-07-15 22:48:42.430789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.137 [2024-07-15 22:48:42.430827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.137 qpair failed and we were unable to recover it.
00:24:59.137 [2024-07-15 22:48:42.431055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.137 [2024-07-15 22:48:42.431095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.137 qpair failed and we were unable to recover it.
00:24:59.137 [2024-07-15 22:48:42.431325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.137 [2024-07-15 22:48:42.431362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.137 qpair failed and we were unable to recover it.
00:24:59.137 [2024-07-15 22:48:42.431554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.137 [2024-07-15 22:48:42.431590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.137 qpair failed and we were unable to recover it.
00:24:59.137 [2024-07-15 22:48:42.431786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.137 [2024-07-15 22:48:42.431823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.137 qpair failed and we were unable to recover it.
00:24:59.137 [2024-07-15 22:48:42.432030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.137 [2024-07-15 22:48:42.432066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.137 qpair failed and we were unable to recover it.
00:24:59.137 [2024-07-15 22:48:42.432275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.137 [2024-07-15 22:48:42.432313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.137 qpair failed and we were unable to recover it.
00:24:59.137 [2024-07-15 22:48:42.432529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.137 [2024-07-15 22:48:42.432565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.137 qpair failed and we were unable to recover it.
00:24:59.137 [2024-07-15 22:48:42.432762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.137 [2024-07-15 22:48:42.432797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.137 qpair failed and we were unable to recover it.
00:24:59.137 [2024-07-15 22:48:42.432987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.137 [2024-07-15 22:48:42.433026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.137 qpair failed and we were unable to recover it.
00:24:59.137 [2024-07-15 22:48:42.433222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.137 [2024-07-15 22:48:42.433263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.137 qpair failed and we were unable to recover it.
00:24:59.137 [2024-07-15 22:48:42.433465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.137 [2024-07-15 22:48:42.433509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.137 qpair failed and we were unable to recover it.
00:24:59.137 [2024-07-15 22:48:42.433707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.137 [2024-07-15 22:48:42.433749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.137 qpair failed and we were unable to recover it.
00:24:59.137 [2024-07-15 22:48:42.433979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.137 [2024-07-15 22:48:42.434017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.137 qpair failed and we were unable to recover it.
00:24:59.137 [2024-07-15 22:48:42.434248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.137 [2024-07-15 22:48:42.434285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.137 qpair failed and we were unable to recover it.
00:24:59.137 [2024-07-15 22:48:42.434483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.137 [2024-07-15 22:48:42.434520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.137 qpair failed and we were unable to recover it.
00:24:59.137 [2024-07-15 22:48:42.434700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.137 [2024-07-15 22:48:42.434736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.137 qpair failed and we were unable to recover it.
00:24:59.137 [2024-07-15 22:48:42.434962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.137 [2024-07-15 22:48:42.435000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.137 qpair failed and we were unable to recover it. 00:24:59.137 [2024-07-15 22:48:42.435198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.137 [2024-07-15 22:48:42.435247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.137 qpair failed and we were unable to recover it. 00:24:59.137 [2024-07-15 22:48:42.435449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.137 [2024-07-15 22:48:42.435484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.137 qpair failed and we were unable to recover it. 00:24:59.137 [2024-07-15 22:48:42.435719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.137 [2024-07-15 22:48:42.435755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.137 qpair failed and we were unable to recover it. 00:24:59.137 [2024-07-15 22:48:42.435970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.137 [2024-07-15 22:48:42.436007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.137 qpair failed and we were unable to recover it. 
00:24:59.137 [2024-07-15 22:48:42.436212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.137 [2024-07-15 22:48:42.436249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.137 qpair failed and we were unable to recover it. 00:24:59.137 [2024-07-15 22:48:42.436453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.137 [2024-07-15 22:48:42.436488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.137 qpair failed and we were unable to recover it. 00:24:59.137 [2024-07-15 22:48:42.436687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.436726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 00:24:59.138 [2024-07-15 22:48:42.436919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.436958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 00:24:59.138 [2024-07-15 22:48:42.437180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.437216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 
00:24:59.138 [2024-07-15 22:48:42.437417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.437454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 00:24:59.138 [2024-07-15 22:48:42.437654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.437691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 00:24:59.138 [2024-07-15 22:48:42.437902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.437938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 00:24:59.138 [2024-07-15 22:48:42.438132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.438180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 00:24:59.138 [2024-07-15 22:48:42.438381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.438418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 
00:24:59.138 [2024-07-15 22:48:42.438619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.438655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 00:24:59.138 [2024-07-15 22:48:42.438848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.438953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 00:24:59.138 [2024-07-15 22:48:42.439185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.439221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 00:24:59.138 [2024-07-15 22:48:42.439420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.439458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 00:24:59.138 [2024-07-15 22:48:42.439659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.439696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 
00:24:59.138 [2024-07-15 22:48:42.439921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.439960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 00:24:59.138 [2024-07-15 22:48:42.440160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.440203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 00:24:59.138 [2024-07-15 22:48:42.440383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.440426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 00:24:59.138 [2024-07-15 22:48:42.440629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.440665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 00:24:59.138 [2024-07-15 22:48:42.440893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.440932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 
00:24:59.138 [2024-07-15 22:48:42.441138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.441179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 00:24:59.138 [2024-07-15 22:48:42.441374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.441408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 00:24:59.138 [2024-07-15 22:48:42.441601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.441638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 00:24:59.138 [2024-07-15 22:48:42.441863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.441911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 00:24:59.138 [2024-07-15 22:48:42.442114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.442151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 
00:24:59.138 [2024-07-15 22:48:42.442381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.442419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 00:24:59.138 [2024-07-15 22:48:42.442594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.442629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 00:24:59.138 [2024-07-15 22:48:42.442681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd4d0e0 (9): Bad file descriptor 00:24:59.138 [2024-07-15 22:48:42.442955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.442996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 00:24:59.138 [2024-07-15 22:48:42.443182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.443210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 00:24:59.138 [2024-07-15 22:48:42.443364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.443391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 
00:24:59.138 [2024-07-15 22:48:42.443567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.443600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 00:24:59.138 [2024-07-15 22:48:42.443804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.443831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 00:24:59.138 [2024-07-15 22:48:42.444034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.444061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 00:24:59.138 [2024-07-15 22:48:42.444236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.444262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 00:24:59.138 [2024-07-15 22:48:42.444437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.444465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 
00:24:59.138 [2024-07-15 22:48:42.444634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.444661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 00:24:59.138 [2024-07-15 22:48:42.444840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.444883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 00:24:59.138 [2024-07-15 22:48:42.445031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.445059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 00:24:59.138 [2024-07-15 22:48:42.445247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.445274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 00:24:59.138 [2024-07-15 22:48:42.445448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.445475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 
00:24:59.138 [2024-07-15 22:48:42.445652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.445678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 00:24:59.138 [2024-07-15 22:48:42.445853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.445888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 00:24:59.138 [2024-07-15 22:48:42.446066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.138 [2024-07-15 22:48:42.446095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.138 qpair failed and we were unable to recover it. 00:24:59.138 [2024-07-15 22:48:42.446265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.446292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 00:24:59.139 [2024-07-15 22:48:42.446448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.446475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 
00:24:59.139 [2024-07-15 22:48:42.446645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.446673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 00:24:59.139 [2024-07-15 22:48:42.446873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.446908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 00:24:59.139 [2024-07-15 22:48:42.447075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.447101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 00:24:59.139 [2024-07-15 22:48:42.447281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.447308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 00:24:59.139 [2024-07-15 22:48:42.447504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.447531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 
00:24:59.139 [2024-07-15 22:48:42.447731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.447757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 00:24:59.139 [2024-07-15 22:48:42.447924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.447952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 00:24:59.139 [2024-07-15 22:48:42.448130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.448156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 00:24:59.139 [2024-07-15 22:48:42.448330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.448357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 00:24:59.139 [2024-07-15 22:48:42.448528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.448555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 
00:24:59.139 [2024-07-15 22:48:42.448724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.448750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 00:24:59.139 [2024-07-15 22:48:42.448954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.448982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 00:24:59.139 [2024-07-15 22:48:42.449184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.449211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 00:24:59.139 [2024-07-15 22:48:42.449391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.449418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 00:24:59.139 [2024-07-15 22:48:42.449582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.449609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 
00:24:59.139 [2024-07-15 22:48:42.449777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.449804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 00:24:59.139 [2024-07-15 22:48:42.449997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.450025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 00:24:59.139 [2024-07-15 22:48:42.450195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.450222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 00:24:59.139 [2024-07-15 22:48:42.450399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.450426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 00:24:59.139 [2024-07-15 22:48:42.450580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.450607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 
00:24:59.139 [2024-07-15 22:48:42.450785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.450813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 00:24:59.139 [2024-07-15 22:48:42.451022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.451049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 00:24:59.139 [2024-07-15 22:48:42.451219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.451247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 00:24:59.139 [2024-07-15 22:48:42.451423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.451450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 00:24:59.139 [2024-07-15 22:48:42.451649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.451676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 
00:24:59.139 [2024-07-15 22:48:42.451883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.451915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 00:24:59.139 [2024-07-15 22:48:42.452055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.452082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 00:24:59.139 [2024-07-15 22:48:42.455056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.455097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 00:24:59.139 [2024-07-15 22:48:42.455308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.455338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 00:24:59.139 [2024-07-15 22:48:42.455519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.455547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 
00:24:59.139 [2024-07-15 22:48:42.455750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.455777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 00:24:59.139 [2024-07-15 22:48:42.455932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.455960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 00:24:59.139 [2024-07-15 22:48:42.456138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.456165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 00:24:59.139 [2024-07-15 22:48:42.456347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.456374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 00:24:59.139 [2024-07-15 22:48:42.456552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.456580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 
00:24:59.139 [2024-07-15 22:48:42.456753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.456780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 00:24:59.139 [2024-07-15 22:48:42.457006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.457034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 00:24:59.139 [2024-07-15 22:48:42.457245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.457273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 00:24:59.139 [2024-07-15 22:48:42.457451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.139 [2024-07-15 22:48:42.457478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.139 qpair failed and we were unable to recover it. 00:24:59.139 [2024-07-15 22:48:42.457641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.457668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 
00:24:59.140 [2024-07-15 22:48:42.457847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.457886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 00:24:59.140 [2024-07-15 22:48:42.458071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.458098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 00:24:59.140 [2024-07-15 22:48:42.458273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.458300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 00:24:59.140 [2024-07-15 22:48:42.458472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.458499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 00:24:59.140 [2024-07-15 22:48:42.458679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.458706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 
00:24:59.140 [2024-07-15 22:48:42.458860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.458905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 00:24:59.140 [2024-07-15 22:48:42.459086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.459114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 00:24:59.140 [2024-07-15 22:48:42.459330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.459357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 00:24:59.140 [2024-07-15 22:48:42.459526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.459553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 00:24:59.140 [2024-07-15 22:48:42.459768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.459795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 
00:24:59.140 [2024-07-15 22:48:42.460001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.460029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 00:24:59.140 [2024-07-15 22:48:42.460230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.460256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 00:24:59.140 [2024-07-15 22:48:42.460449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.460479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 00:24:59.140 [2024-07-15 22:48:42.460670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.460698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 00:24:59.140 [2024-07-15 22:48:42.460863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.460911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 
00:24:59.140 [2024-07-15 22:48:42.461146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.461184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 00:24:59.140 [2024-07-15 22:48:42.461364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.461391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 00:24:59.140 [2024-07-15 22:48:42.461608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.461650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 00:24:59.140 [2024-07-15 22:48:42.461843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.461900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 00:24:59.140 [2024-07-15 22:48:42.462056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.462083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 
00:24:59.140 [2024-07-15 22:48:42.462270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.462296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 00:24:59.140 [2024-07-15 22:48:42.462526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.462553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 00:24:59.140 [2024-07-15 22:48:42.462763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.462789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 00:24:59.140 [2024-07-15 22:48:42.462933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.462961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 00:24:59.140 [2024-07-15 22:48:42.463114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.463142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 
00:24:59.140 [2024-07-15 22:48:42.463327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.463354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 00:24:59.140 [2024-07-15 22:48:42.463570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.463596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 00:24:59.140 [2024-07-15 22:48:42.463806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.463833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 00:24:59.140 [2024-07-15 22:48:42.463988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.464016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 00:24:59.140 [2024-07-15 22:48:42.464164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.464191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 00:24:59.140 [2024-07-15 22:48:42.464259] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:24:59.140 [2024-07-15 22:48:42.464332] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:59.140 [2024-07-15 22:48:42.464399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.464425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 00:24:59.140 [2024-07-15 22:48:42.464618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.464642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 00:24:59.140 [2024-07-15 22:48:42.464782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.464809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 00:24:59.140 [2024-07-15 22:48:42.464997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.465024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 
00:24:59.140 [2024-07-15 22:48:42.465193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.465219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 00:24:59.140 [2024-07-15 22:48:42.465394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.465420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 00:24:59.140 [2024-07-15 22:48:42.465609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.465635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 00:24:59.140 [2024-07-15 22:48:42.465835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.465861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 00:24:59.140 [2024-07-15 22:48:42.466049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.140 [2024-07-15 22:48:42.466076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.140 qpair failed and we were unable to recover it. 
00:24:59.140 [2024-07-15 22:48:42.466250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.466283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 00:24:59.141 [2024-07-15 22:48:42.466492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.466517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 00:24:59.141 [2024-07-15 22:48:42.466690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.466714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 00:24:59.141 [2024-07-15 22:48:42.466924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.466950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 00:24:59.141 [2024-07-15 22:48:42.467154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.467179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 
00:24:59.141 [2024-07-15 22:48:42.467356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.467380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 00:24:59.141 [2024-07-15 22:48:42.467583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.467608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 00:24:59.141 [2024-07-15 22:48:42.467816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.467842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 00:24:59.141 [2024-07-15 22:48:42.468068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.468094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 00:24:59.141 [2024-07-15 22:48:42.468255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.468279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 
00:24:59.141 [2024-07-15 22:48:42.468467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.468492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 00:24:59.141 [2024-07-15 22:48:42.468668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.468693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 00:24:59.141 [2024-07-15 22:48:42.468898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.468928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 00:24:59.141 [2024-07-15 22:48:42.469102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.469130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 00:24:59.141 [2024-07-15 22:48:42.469341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.469366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 
00:24:59.141 [2024-07-15 22:48:42.469547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.469572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 00:24:59.141 [2024-07-15 22:48:42.469752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.469776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 00:24:59.141 [2024-07-15 22:48:42.469955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.469989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 00:24:59.141 [2024-07-15 22:48:42.470193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.470218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 00:24:59.141 [2024-07-15 22:48:42.470404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.470429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 
00:24:59.141 [2024-07-15 22:48:42.470567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.470591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 00:24:59.141 [2024-07-15 22:48:42.470775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.470800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 00:24:59.141 [2024-07-15 22:48:42.471009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.471035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 00:24:59.141 [2024-07-15 22:48:42.471183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.471207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 00:24:59.141 [2024-07-15 22:48:42.471369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.471394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 
00:24:59.141 [2024-07-15 22:48:42.471573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.471598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 00:24:59.141 [2024-07-15 22:48:42.471784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.471809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 00:24:59.141 [2024-07-15 22:48:42.471978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.472003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 00:24:59.141 [2024-07-15 22:48:42.472150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.472174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 00:24:59.141 [2024-07-15 22:48:42.472358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.472382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 
00:24:59.141 [2024-07-15 22:48:42.472585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.472609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 00:24:59.141 [2024-07-15 22:48:42.472788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.472813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 00:24:59.141 [2024-07-15 22:48:42.473006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.473031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 00:24:59.141 [2024-07-15 22:48:42.473206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.473231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 00:24:59.141 [2024-07-15 22:48:42.473437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.473461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 
00:24:59.141 [2024-07-15 22:48:42.473639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.473663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 00:24:59.141 [2024-07-15 22:48:42.473833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.141 [2024-07-15 22:48:42.473858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.141 qpair failed and we were unable to recover it. 00:24:59.142 [2024-07-15 22:48:42.474049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.142 [2024-07-15 22:48:42.474074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.142 qpair failed and we were unable to recover it. 00:24:59.142 [2024-07-15 22:48:42.474283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.142 [2024-07-15 22:48:42.474309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.142 qpair failed and we were unable to recover it. 00:24:59.142 [2024-07-15 22:48:42.476890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.142 [2024-07-15 22:48:42.476927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.142 qpair failed and we were unable to recover it. 
00:24:59.142 [2024-07-15 22:48:42.477138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.142 [2024-07-15 22:48:42.477165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.142 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix.c:1038 connect() errno = 111, nvme_tcp.c:2383 sock connection error of tqpair=0xd3f200 addr=10.0.0.2 port=4420, "qpair failed and we were unable to recover it.") repeats continuously with successive timestamps from 22:48:42.477138 through 22:48:42.509559 ...]
00:24:59.144 [2024-07-15 22:48:42.509776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.144 [2024-07-15 22:48:42.509808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.144 qpair failed and we were unable to recover it. 00:24:59.144 [2024-07-15 22:48:42.510090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.144 [2024-07-15 22:48:42.510125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.144 qpair failed and we were unable to recover it. 00:24:59.144 [2024-07-15 22:48:42.510348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.144 [2024-07-15 22:48:42.510396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.144 qpair failed and we were unable to recover it. 00:24:59.144 [2024-07-15 22:48:42.510614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.144 [2024-07-15 22:48:42.510648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.144 qpair failed and we were unable to recover it. 00:24:59.144 [2024-07-15 22:48:42.510851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.144 [2024-07-15 22:48:42.510896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.144 qpair failed and we were unable to recover it. 
00:24:59.144 [2024-07-15 22:48:42.511126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.144 [2024-07-15 22:48:42.511162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.144 qpair failed and we were unable to recover it. 00:24:59.144 [2024-07-15 22:48:42.511376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.144 [2024-07-15 22:48:42.511425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.144 qpair failed and we were unable to recover it. 00:24:59.145 [2024-07-15 22:48:42.511724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.511759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 00:24:59.145 [2024-07-15 22:48:42.512007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.512044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 00:24:59.145 [2024-07-15 22:48:42.512255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.512303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 
00:24:59.145 [2024-07-15 22:48:42.512517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.512550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 00:24:59.145 [2024-07-15 22:48:42.512762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.512796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 00:24:59.145 [2024-07-15 22:48:42.512981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.513017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 00:24:59.145 [2024-07-15 22:48:42.513260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.513294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 00:24:59.145 [2024-07-15 22:48:42.513530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.513566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 
00:24:59.145 [2024-07-15 22:48:42.513788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.513822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 00:24:59.145 [2024-07-15 22:48:42.514038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.514073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 00:24:59.145 [2024-07-15 22:48:42.514369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.514417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 00:24:59.145 [2024-07-15 22:48:42.514650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.514684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 00:24:59.145 [2024-07-15 22:48:42.514917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.514952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 
00:24:59.145 [2024-07-15 22:48:42.515154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.515193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 00:24:59.145 [2024-07-15 22:48:42.515367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.515404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 00:24:59.145 [2024-07-15 22:48:42.515730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.515790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 00:24:59.145 [2024-07-15 22:48:42.516027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.516062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 00:24:59.145 [2024-07-15 22:48:42.516362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.516395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 
00:24:59.145 [2024-07-15 22:48:42.516694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.516729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 00:24:59.145 [2024-07-15 22:48:42.516988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.517022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 00:24:59.145 [2024-07-15 22:48:42.517236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.517269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 00:24:59.145 [2024-07-15 22:48:42.517548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.517581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 00:24:59.145 [2024-07-15 22:48:42.517832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.517873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 
00:24:59.145 [2024-07-15 22:48:42.518073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.518105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 00:24:59.145 [2024-07-15 22:48:42.518350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.518384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 00:24:59.145 [2024-07-15 22:48:42.518564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.518601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 00:24:59.145 [2024-07-15 22:48:42.518841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.518896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 00:24:59.145 [2024-07-15 22:48:42.519109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.519145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 
00:24:59.145 [2024-07-15 22:48:42.519387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.519427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 00:24:59.145 [2024-07-15 22:48:42.519640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.519677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 00:24:59.145 [2024-07-15 22:48:42.519874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.519949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 00:24:59.145 [2024-07-15 22:48:42.520204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.520254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 00:24:59.145 [2024-07-15 22:48:42.520462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.520501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 
00:24:59.145 [2024-07-15 22:48:42.520766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.520806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 00:24:59.145 [2024-07-15 22:48:42.521053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.521097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 00:24:59.145 [2024-07-15 22:48:42.521341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.145 [2024-07-15 22:48:42.521380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.145 qpair failed and we were unable to recover it. 00:24:59.145 [2024-07-15 22:48:42.521612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.146 [2024-07-15 22:48:42.521651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.146 qpair failed and we were unable to recover it. 00:24:59.146 [2024-07-15 22:48:42.521934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.146 [2024-07-15 22:48:42.521977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.146 qpair failed and we were unable to recover it. 
00:24:59.146 [2024-07-15 22:48:42.522268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.146 [2024-07-15 22:48:42.522318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.146 qpair failed and we were unable to recover it. 00:24:59.146 [2024-07-15 22:48:42.522562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.146 [2024-07-15 22:48:42.522598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.146 qpair failed and we were unable to recover it. 00:24:59.146 [2024-07-15 22:48:42.522913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.146 [2024-07-15 22:48:42.522963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.146 qpair failed and we were unable to recover it. 00:24:59.146 [2024-07-15 22:48:42.523224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.146 [2024-07-15 22:48:42.523262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.146 qpair failed and we were unable to recover it. 00:24:59.146 [2024-07-15 22:48:42.523412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.146 [2024-07-15 22:48:42.523440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.146 qpair failed and we were unable to recover it. 
00:24:59.146 [2024-07-15 22:48:42.523608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.146 [2024-07-15 22:48:42.523636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.146 qpair failed and we were unable to recover it. 00:24:59.146 [2024-07-15 22:48:42.523814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.146 [2024-07-15 22:48:42.523841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.146 qpair failed and we were unable to recover it. 00:24:59.146 [2024-07-15 22:48:42.524038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.146 [2024-07-15 22:48:42.524066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.146 qpair failed and we were unable to recover it. 00:24:59.146 [2024-07-15 22:48:42.524255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.146 [2024-07-15 22:48:42.524281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.146 qpair failed and we were unable to recover it. 00:24:59.146 [2024-07-15 22:48:42.524482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.146 [2024-07-15 22:48:42.524508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.146 qpair failed and we were unable to recover it. 
00:24:59.146 [2024-07-15 22:48:42.524709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.146 [2024-07-15 22:48:42.524735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.146 qpair failed and we were unable to recover it. 00:24:59.146 [2024-07-15 22:48:42.524919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.146 [2024-07-15 22:48:42.524947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.146 qpair failed and we were unable to recover it. 00:24:59.146 [2024-07-15 22:48:42.525195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.146 [2024-07-15 22:48:42.525222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.146 qpair failed and we were unable to recover it. 00:24:59.146 [2024-07-15 22:48:42.525434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.146 [2024-07-15 22:48:42.525460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.146 qpair failed and we were unable to recover it. 00:24:59.146 [2024-07-15 22:48:42.525624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.146 [2024-07-15 22:48:42.525658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.146 qpair failed and we were unable to recover it. 
00:24:59.146 [2024-07-15 22:48:42.525847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.146 [2024-07-15 22:48:42.525888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.146 qpair failed and we were unable to recover it. 00:24:59.146 [2024-07-15 22:48:42.526064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.146 [2024-07-15 22:48:42.526095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.146 qpair failed and we were unable to recover it. 00:24:59.146 [2024-07-15 22:48:42.526294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.146 [2024-07-15 22:48:42.526321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.146 qpair failed and we were unable to recover it. 00:24:59.146 [2024-07-15 22:48:42.526499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.146 [2024-07-15 22:48:42.526525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.146 qpair failed and we were unable to recover it. 00:24:59.146 [2024-07-15 22:48:42.526721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.146 [2024-07-15 22:48:42.526747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.146 qpair failed and we were unable to recover it. 
00:24:59.146 [2024-07-15 22:48:42.526904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.146 [2024-07-15 22:48:42.526932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.146 qpair failed and we were unable to recover it. 00:24:59.146 [2024-07-15 22:48:42.527104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.146 [2024-07-15 22:48:42.527131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.146 qpair failed and we were unable to recover it. 00:24:59.146 [2024-07-15 22:48:42.527310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.146 [2024-07-15 22:48:42.527336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.146 qpair failed and we were unable to recover it. 00:24:59.146 [2024-07-15 22:48:42.527516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.146 [2024-07-15 22:48:42.527542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.146 qpair failed and we were unable to recover it. 00:24:59.146 [2024-07-15 22:48:42.527711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.146 [2024-07-15 22:48:42.527737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.146 qpair failed and we were unable to recover it. 
00:24:59.146 [2024-07-15 22:48:42.527895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.146 [2024-07-15 22:48:42.527923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.146 qpair failed and we were unable to recover it. 00:24:59.146 [2024-07-15 22:48:42.528088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.146 [2024-07-15 22:48:42.528117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.146 qpair failed and we were unable to recover it. 00:24:59.146 [2024-07-15 22:48:42.528302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.146 [2024-07-15 22:48:42.528328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.146 qpair failed and we were unable to recover it. 00:24:59.146 [2024-07-15 22:48:42.528481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.146 [2024-07-15 22:48:42.528508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.146 qpair failed and we were unable to recover it. 00:24:59.146 [2024-07-15 22:48:42.528645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.146 [2024-07-15 22:48:42.528687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.146 qpair failed and we were unable to recover it. 
00:24:59.146 [2024-07-15 22:48:42.528883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.146 [2024-07-15 22:48:42.528911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:24:59.146 qpair failed and we were unable to recover it.
[... the connect()/qpair-failure pair above repeats for tqpair=0xd3f200 through 22:48:42.537778 ...]
00:24:59.147 [2024-07-15 22:48:42.537957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.147 [2024-07-15 22:48:42.538000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:59.147 qpair failed and we were unable to recover it.
[... retries continue through 22:48:42.553015, alternating between tqpair=0x7fd658000b90 and tqpair=0xd3f200, every attempt failing with errno = 111 ...]
00:24:59.149 [2024-07-15 22:48:42.553199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.149 [2024-07-15 22:48:42.553226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.149 qpair failed and we were unable to recover it. 00:24:59.149 [2024-07-15 22:48:42.553410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.149 [2024-07-15 22:48:42.553437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.149 qpair failed and we were unable to recover it. 00:24:59.149 [2024-07-15 22:48:42.553619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.149 [2024-07-15 22:48:42.553648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.149 qpair failed and we were unable to recover it. 00:24:59.149 [2024-07-15 22:48:42.553751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:59.149 [2024-07-15 22:48:42.553827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.149 [2024-07-15 22:48:42.553855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.149 qpair failed and we were unable to recover it. 00:24:59.149 [2024-07-15 22:48:42.554072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.149 [2024-07-15 22:48:42.554113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.149 qpair failed and we were unable to recover it. 
00:24:59.149 [2024-07-15 22:48:42.554313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.149 [2024-07-15 22:48:42.554341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.149 qpair failed and we were unable to recover it. 00:24:59.149 [2024-07-15 22:48:42.554526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.149 [2024-07-15 22:48:42.554554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.149 qpair failed and we were unable to recover it. 00:24:59.149 [2024-07-15 22:48:42.554742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.149 [2024-07-15 22:48:42.554771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.149 qpair failed and we were unable to recover it. 00:24:59.149 [2024-07-15 22:48:42.554927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.149 [2024-07-15 22:48:42.554954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.149 qpair failed and we were unable to recover it. 00:24:59.149 [2024-07-15 22:48:42.555129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.149 [2024-07-15 22:48:42.555155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.149 qpair failed and we were unable to recover it. 
00:24:59.149 [2024-07-15 22:48:42.555341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.149 [2024-07-15 22:48:42.555377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.149 qpair failed and we were unable to recover it. 00:24:59.149 [2024-07-15 22:48:42.555520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.149 [2024-07-15 22:48:42.555551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.149 qpair failed and we were unable to recover it. 00:24:59.149 [2024-07-15 22:48:42.555752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.149 [2024-07-15 22:48:42.555778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.149 qpair failed and we were unable to recover it. 00:24:59.149 [2024-07-15 22:48:42.555969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.149 [2024-07-15 22:48:42.555996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.149 qpair failed and we were unable to recover it. 00:24:59.149 [2024-07-15 22:48:42.556151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.149 [2024-07-15 22:48:42.556177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.149 qpair failed and we were unable to recover it. 
00:24:59.149 [2024-07-15 22:48:42.556357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.149 [2024-07-15 22:48:42.556383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.149 qpair failed and we were unable to recover it. 00:24:59.149 [2024-07-15 22:48:42.556562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.149 [2024-07-15 22:48:42.556588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.149 qpair failed and we were unable to recover it. 00:24:59.149 [2024-07-15 22:48:42.556750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.149 [2024-07-15 22:48:42.556776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.149 qpair failed and we were unable to recover it. 00:24:59.149 [2024-07-15 22:48:42.556929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.149 [2024-07-15 22:48:42.556956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.149 qpair failed and we were unable to recover it. 00:24:59.149 [2024-07-15 22:48:42.557135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.149 [2024-07-15 22:48:42.557162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.149 qpair failed and we were unable to recover it. 
00:24:59.149 [2024-07-15 22:48:42.557353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.149 [2024-07-15 22:48:42.557379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.149 qpair failed and we were unable to recover it. 00:24:59.149 [2024-07-15 22:48:42.557554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.149 [2024-07-15 22:48:42.557581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.149 qpair failed and we were unable to recover it. 00:24:59.149 [2024-07-15 22:48:42.557753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.149 [2024-07-15 22:48:42.557780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.149 qpair failed and we were unable to recover it. 00:24:59.149 [2024-07-15 22:48:42.557923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.557951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 00:24:59.150 [2024-07-15 22:48:42.558130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.558156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 
00:24:59.150 [2024-07-15 22:48:42.558339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.558365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 00:24:59.150 [2024-07-15 22:48:42.558543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.558570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 00:24:59.150 [2024-07-15 22:48:42.558744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.558771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 00:24:59.150 [2024-07-15 22:48:42.558959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.558986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 00:24:59.150 [2024-07-15 22:48:42.559171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.559208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 
00:24:59.150 [2024-07-15 22:48:42.559367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.559393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 00:24:59.150 [2024-07-15 22:48:42.559569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.559595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 00:24:59.150 [2024-07-15 22:48:42.559780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.559806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 00:24:59.150 [2024-07-15 22:48:42.559994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.560021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 00:24:59.150 [2024-07-15 22:48:42.560194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.560219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 
00:24:59.150 [2024-07-15 22:48:42.560401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.560435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 00:24:59.150 [2024-07-15 22:48:42.560625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.560652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 00:24:59.150 [2024-07-15 22:48:42.560809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.560836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 00:24:59.150 [2024-07-15 22:48:42.561036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.561065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 00:24:59.150 [2024-07-15 22:48:42.561372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.561398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 
00:24:59.150 [2024-07-15 22:48:42.561676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.561702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 00:24:59.150 [2024-07-15 22:48:42.561890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.561920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 00:24:59.150 [2024-07-15 22:48:42.562103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.562129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 00:24:59.150 [2024-07-15 22:48:42.562316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.562342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 00:24:59.150 [2024-07-15 22:48:42.562542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.562569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 
00:24:59.150 [2024-07-15 22:48:42.562726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.562753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 00:24:59.150 [2024-07-15 22:48:42.562953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.562980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 00:24:59.150 [2024-07-15 22:48:42.563119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.563145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 00:24:59.150 [2024-07-15 22:48:42.563338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.563365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 00:24:59.150 [2024-07-15 22:48:42.563541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.563568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 
00:24:59.150 [2024-07-15 22:48:42.563777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.563803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 00:24:59.150 [2024-07-15 22:48:42.564003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.564034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 00:24:59.150 [2024-07-15 22:48:42.564209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.564246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 00:24:59.150 [2024-07-15 22:48:42.564422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.564448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 00:24:59.150 [2024-07-15 22:48:42.564629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.564655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 
00:24:59.150 [2024-07-15 22:48:42.564873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.564904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 00:24:59.150 [2024-07-15 22:48:42.565051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.565077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 00:24:59.150 [2024-07-15 22:48:42.565297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.565324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 00:24:59.150 [2024-07-15 22:48:42.565469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.150 [2024-07-15 22:48:42.565496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.150 qpair failed and we were unable to recover it. 00:24:59.150 [2024-07-15 22:48:42.565670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.151 [2024-07-15 22:48:42.565706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.151 qpair failed and we were unable to recover it. 
00:24:59.151 [2024-07-15 22:48:42.565867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.151 [2024-07-15 22:48:42.565900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.151 qpair failed and we were unable to recover it. 00:24:59.151 [2024-07-15 22:48:42.566046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.151 [2024-07-15 22:48:42.566072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.151 qpair failed and we were unable to recover it. 00:24:59.151 [2024-07-15 22:48:42.566248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.151 [2024-07-15 22:48:42.566274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.151 qpair failed and we were unable to recover it. 00:24:59.151 [2024-07-15 22:48:42.566456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.151 [2024-07-15 22:48:42.566483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.151 qpair failed and we were unable to recover it. 00:24:59.151 [2024-07-15 22:48:42.566643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.151 [2024-07-15 22:48:42.566669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.151 qpair failed and we were unable to recover it. 
00:24:59.151 [2024-07-15 22:48:42.566847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.151 [2024-07-15 22:48:42.566874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.151 qpair failed and we were unable to recover it. 00:24:59.151 [2024-07-15 22:48:42.567029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.151 [2024-07-15 22:48:42.567057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.151 qpair failed and we were unable to recover it. 00:24:59.151 [2024-07-15 22:48:42.567209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.151 [2024-07-15 22:48:42.567247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.151 qpair failed and we were unable to recover it. 00:24:59.151 [2024-07-15 22:48:42.567418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.151 [2024-07-15 22:48:42.567445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.151 qpair failed and we were unable to recover it. 00:24:59.151 [2024-07-15 22:48:42.567594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.151 [2024-07-15 22:48:42.567621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.151 qpair failed and we were unable to recover it. 
00:24:59.151 [2024-07-15 22:48:42.567810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.151 [2024-07-15 22:48:42.567836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.151 qpair failed and we were unable to recover it. 00:24:59.151 [2024-07-15 22:48:42.568006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.151 [2024-07-15 22:48:42.568032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.151 qpair failed and we were unable to recover it. 00:24:59.151 [2024-07-15 22:48:42.568217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.151 [2024-07-15 22:48:42.568244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.151 qpair failed and we were unable to recover it. 00:24:59.151 [2024-07-15 22:48:42.568434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.151 [2024-07-15 22:48:42.568460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.151 qpair failed and we were unable to recover it. 00:24:59.151 [2024-07-15 22:48:42.568604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.151 [2024-07-15 22:48:42.568630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.151 qpair failed and we were unable to recover it. 
00:24:59.151 [2024-07-15 22:48:42.568835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.151 [2024-07-15 22:48:42.568873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.151 qpair failed and we were unable to recover it. 00:24:59.151 [2024-07-15 22:48:42.569062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.151 [2024-07-15 22:48:42.569088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.151 qpair failed and we were unable to recover it. 00:24:59.151 [2024-07-15 22:48:42.569237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.151 [2024-07-15 22:48:42.569263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.151 qpair failed and we were unable to recover it. 00:24:59.151 [2024-07-15 22:48:42.569442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.151 [2024-07-15 22:48:42.569476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.151 qpair failed and we were unable to recover it. 00:24:59.151 [2024-07-15 22:48:42.569681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.151 [2024-07-15 22:48:42.569707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.151 qpair failed and we were unable to recover it. 
00:24:59.151 [2024-07-15 22:48:42.569858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.151 [2024-07-15 22:48:42.569923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.151 qpair failed and we were unable to recover it. 00:24:59.151 [2024-07-15 22:48:42.570110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.151 [2024-07-15 22:48:42.570137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.151 qpair failed and we were unable to recover it. 00:24:59.151 [2024-07-15 22:48:42.570307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.151 [2024-07-15 22:48:42.570333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.151 qpair failed and we were unable to recover it. 00:24:59.151 [2024-07-15 22:48:42.570516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.151 [2024-07-15 22:48:42.570542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.151 qpair failed and we were unable to recover it. 00:24:59.151 [2024-07-15 22:48:42.570724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.151 [2024-07-15 22:48:42.570751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.151 qpair failed and we were unable to recover it. 
00:24:59.439 [2024-07-15 22:48:42.593051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-15 22:48:42.593078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-15 22:48:42.593250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-15 22:48:42.593276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-15 22:48:42.593462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-15 22:48:42.593489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-15 22:48:42.593667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-15 22:48:42.593708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-15 22:48:42.593883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-15 22:48:42.593911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 
00:24:59.439 [2024-07-15 22:48:42.594091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-15 22:48:42.594119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-15 22:48:42.594311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-15 22:48:42.594337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-15 22:48:42.594507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-15 22:48:42.594534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-15 22:48:42.594709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-15 22:48:42.594736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-15 22:48:42.594903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-15 22:48:42.594931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 
00:24:59.439 [2024-07-15 22:48:42.595110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-15 22:48:42.595137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-15 22:48:42.595330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-15 22:48:42.595357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-15 22:48:42.595531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-15 22:48:42.595558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-15 22:48:42.595711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-15 22:48:42.595738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-15 22:48:42.595925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-15 22:48:42.595955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 
00:24:59.439 [2024-07-15 22:48:42.596129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-15 22:48:42.596155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-15 22:48:42.596341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-15 22:48:42.596372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-15 22:48:42.596557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-15 22:48:42.596584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-15 22:48:42.596799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.596825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-15 22:48:42.597016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.597043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 
00:24:59.440 [2024-07-15 22:48:42.597248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.597274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-15 22:48:42.597462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.597489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-15 22:48:42.597679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.597706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-15 22:48:42.597909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.597936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-15 22:48:42.598088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.598114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 
00:24:59.440 [2024-07-15 22:48:42.598323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.598349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-15 22:48:42.598552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.598579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-15 22:48:42.598774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.598801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-15 22:48:42.598996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.599023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-15 22:48:42.599169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.599201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 
00:24:59.440 [2024-07-15 22:48:42.599389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.599415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-15 22:48:42.599567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.599593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-15 22:48:42.599799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.599826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-15 22:48:42.600013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.600041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-15 22:48:42.600218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.600243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 
00:24:59.440 [2024-07-15 22:48:42.600406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.600432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-15 22:48:42.600605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.600643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-15 22:48:42.600847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.600887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-15 22:48:42.601071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.601098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-15 22:48:42.601270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.601297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 
00:24:59.440 [2024-07-15 22:48:42.601497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.601523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-15 22:48:42.601671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.601700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-15 22:48:42.601944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.601984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-15 22:48:42.602171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.602205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-15 22:48:42.602385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.602414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 
00:24:59.440 [2024-07-15 22:48:42.602587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.602625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-15 22:48:42.602771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.602798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-15 22:48:42.602975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.603003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-15 22:48:42.603185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.603211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-15 22:48:42.603389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.603427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 
00:24:59.440 [2024-07-15 22:48:42.603621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.603648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-15 22:48:42.603823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.603850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-15 22:48:42.604040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.604067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-15 22:48:42.604208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.604234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-15 22:48:42.604385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.604412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 
00:24:59.440 [2024-07-15 22:48:42.604584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.604611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-15 22:48:42.604758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.604789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-15 22:48:42.604977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.605005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-15 22:48:42.605191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-15 22:48:42.605218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.441 [2024-07-15 22:48:42.605395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.605421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 
00:24:59.441 [2024-07-15 22:48:42.605575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.605604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 00:24:59.441 [2024-07-15 22:48:42.605794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.605842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 00:24:59.441 [2024-07-15 22:48:42.606044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.606073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 00:24:59.441 [2024-07-15 22:48:42.606250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.606277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 00:24:59.441 [2024-07-15 22:48:42.606452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.606491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 
00:24:59.441 [2024-07-15 22:48:42.606668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.606694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 00:24:59.441 [2024-07-15 22:48:42.606895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.606922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 00:24:59.441 [2024-07-15 22:48:42.607093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.607120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 00:24:59.441 [2024-07-15 22:48:42.607274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.607301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 00:24:59.441 [2024-07-15 22:48:42.607474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.607500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 
00:24:59.441 [2024-07-15 22:48:42.607684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.607710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 00:24:59.441 [2024-07-15 22:48:42.607907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.607934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 00:24:59.441 [2024-07-15 22:48:42.608078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.608104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 00:24:59.441 [2024-07-15 22:48:42.608281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.608307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 00:24:59.441 [2024-07-15 22:48:42.608477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.608503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 
00:24:59.441 [2024-07-15 22:48:42.608681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.608709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 00:24:59.441 [2024-07-15 22:48:42.608853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.608920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 00:24:59.441 [2024-07-15 22:48:42.609098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.609124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 00:24:59.441 [2024-07-15 22:48:42.609294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.609321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 00:24:59.441 [2024-07-15 22:48:42.609494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.609520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 
00:24:59.441 [2024-07-15 22:48:42.609689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.609714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 00:24:59.441 [2024-07-15 22:48:42.609862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.609901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 00:24:59.441 [2024-07-15 22:48:42.610073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.610098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 00:24:59.441 [2024-07-15 22:48:42.610285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.610311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 00:24:59.441 [2024-07-15 22:48:42.610499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.610527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 
00:24:59.441 [2024-07-15 22:48:42.610703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.610730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 00:24:59.441 [2024-07-15 22:48:42.610904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.610930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 00:24:59.441 [2024-07-15 22:48:42.611080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.611107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 00:24:59.441 [2024-07-15 22:48:42.611245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.611282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 00:24:59.441 [2024-07-15 22:48:42.611436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.611463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 
00:24:59.441 [2024-07-15 22:48:42.611648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.611674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 00:24:59.441 [2024-07-15 22:48:42.611818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.611846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 00:24:59.441 [2024-07-15 22:48:42.612020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.612047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 00:24:59.441 [2024-07-15 22:48:42.612214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.612239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 00:24:59.441 [2024-07-15 22:48:42.612417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.612444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 
00:24:59.441 [2024-07-15 22:48:42.612623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.612649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 00:24:59.441 [2024-07-15 22:48:42.612794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.612825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 00:24:59.441 [2024-07-15 22:48:42.613016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.613043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 00:24:59.441 [2024-07-15 22:48:42.613189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.613215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.441 qpair failed and we were unable to recover it. 00:24:59.441 [2024-07-15 22:48:42.613522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.441 [2024-07-15 22:48:42.613549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 
00:24:59.442 [2024-07-15 22:48:42.613773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.613799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 00:24:59.442 [2024-07-15 22:48:42.613963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.613989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 00:24:59.442 [2024-07-15 22:48:42.614162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.614188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 00:24:59.442 [2024-07-15 22:48:42.614336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.614362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 00:24:59.442 [2024-07-15 22:48:42.614577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.614603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 
00:24:59.442 [2024-07-15 22:48:42.614800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.614826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 00:24:59.442 [2024-07-15 22:48:42.614983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.615010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 00:24:59.442 [2024-07-15 22:48:42.615199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.615225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 00:24:59.442 [2024-07-15 22:48:42.615381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.615407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 00:24:59.442 [2024-07-15 22:48:42.615583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.615608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 
00:24:59.442 [2024-07-15 22:48:42.615786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.615812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 00:24:59.442 [2024-07-15 22:48:42.615997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.616023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 00:24:59.442 [2024-07-15 22:48:42.616179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.616205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 00:24:59.442 [2024-07-15 22:48:42.616344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.616370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 00:24:59.442 [2024-07-15 22:48:42.616550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.616576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 
00:24:59.442 [2024-07-15 22:48:42.616748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.616774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 00:24:59.442 [2024-07-15 22:48:42.616918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.616945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 00:24:59.442 [2024-07-15 22:48:42.617089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.617115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 00:24:59.442 [2024-07-15 22:48:42.617313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.617340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 00:24:59.442 [2024-07-15 22:48:42.617512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.617538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 
00:24:59.442 [2024-07-15 22:48:42.617746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.617771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 00:24:59.442 [2024-07-15 22:48:42.617978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.618004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 00:24:59.442 [2024-07-15 22:48:42.618156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.618194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 00:24:59.442 [2024-07-15 22:48:42.618384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.618416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 00:24:59.442 [2024-07-15 22:48:42.618588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.618615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 
00:24:59.442 [2024-07-15 22:48:42.618785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.618811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 00:24:59.442 [2024-07-15 22:48:42.618970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.618997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 00:24:59.442 [2024-07-15 22:48:42.619147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.619173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 00:24:59.442 [2024-07-15 22:48:42.619321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.619350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 00:24:59.442 [2024-07-15 22:48:42.619492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.619518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 
00:24:59.442 [2024-07-15 22:48:42.619670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.619695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 00:24:59.442 [2024-07-15 22:48:42.619838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.619864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 00:24:59.442 [2024-07-15 22:48:42.620020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.620045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 00:24:59.442 [2024-07-15 22:48:42.620218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.620254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 00:24:59.442 [2024-07-15 22:48:42.620452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.620479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 
00:24:59.442 [2024-07-15 22:48:42.620627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.620653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 00:24:59.442 [2024-07-15 22:48:42.620826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.620856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 00:24:59.442 [2024-07-15 22:48:42.621018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.621044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 00:24:59.442 [2024-07-15 22:48:42.621222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.621254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 00:24:59.442 [2024-07-15 22:48:42.621401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.621428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.442 qpair failed and we were unable to recover it. 
00:24:59.442 [2024-07-15 22:48:42.621591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.442 [2024-07-15 22:48:42.621617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-15 22:48:42.621793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-15 22:48:42.621819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-15 22:48:42.622011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-15 22:48:42.622037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-15 22:48:42.622240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-15 22:48:42.622266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-15 22:48:42.622442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-15 22:48:42.622468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 
00:24:59.443 [2024-07-15 22:48:42.622653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-15 22:48:42.622681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-15 22:48:42.622859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-15 22:48:42.622898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-15 22:48:42.623075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-15 22:48:42.623101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-15 22:48:42.623272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-15 22:48:42.623298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-15 22:48:42.623439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-15 22:48:42.623466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 
00:24:59.443 [2024-07-15 22:48:42.623673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-15 22:48:42.623699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-15 22:48:42.623925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-15 22:48:42.623952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-15 22:48:42.624103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-15 22:48:42.624129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-15 22:48:42.624305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-15 22:48:42.624331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-15 22:48:42.624506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-15 22:48:42.624533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 
00:24:59.443 [2024-07-15 22:48:42.624680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-15 22:48:42.624705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-15 22:48:42.624848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-15 22:48:42.624874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-15 22:48:42.625065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-15 22:48:42.625092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-15 22:48:42.625262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-15 22:48:42.625288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-15 22:48:42.625444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-15 22:48:42.625469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 
00:24:59.443 [2024-07-15 22:48:42.625670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-15 22:48:42.625695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-15 22:48:42.625947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-15 22:48:42.625974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-15 22:48:42.626149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-15 22:48:42.626177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-15 22:48:42.626396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-15 22:48:42.626427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-15 22:48:42.626597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-15 22:48:42.626623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 
00:24:59.443 [2024-07-15 22:48:42.626777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-15 22:48:42.626802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-15 22:48:42.626966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-15 22:48:42.626993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-15 22:48:42.627146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-15 22:48:42.627174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-15 22:48:42.627335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-15 22:48:42.627362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-15 22:48:42.627541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-15 22:48:42.627567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 
00:24:59.443 [2024-07-15 22:48:42.627710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-15 22:48:42.627736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-15 22:48:42.627948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-15 22:48:42.627974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-15 22:48:42.628115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-15 22:48:42.628140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-15 22:48:42.628295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-15 22:48:42.628321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.444 qpair failed and we were unable to recover it. 00:24:59.444 [2024-07-15 22:48:42.628522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.444 [2024-07-15 22:48:42.628547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.444 qpair failed and we were unable to recover it. 
00:24:59.444 [2024-07-15 22:48:42.632926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.444 [2024-07-15 22:48:42.632971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:59.444 qpair failed and we were unable to recover it.
00:24:59.446 [2024-07-15 22:48:42.650836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.446 [2024-07-15 22:48:42.650862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.446 qpair failed and we were unable to recover it. 00:24:59.446 [2024-07-15 22:48:42.651009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.446 [2024-07-15 22:48:42.651034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.446 qpair failed and we were unable to recover it. 00:24:59.446 [2024-07-15 22:48:42.651206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.446 [2024-07-15 22:48:42.651232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.446 qpair failed and we were unable to recover it. 00:24:59.446 [2024-07-15 22:48:42.651385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.446 [2024-07-15 22:48:42.651412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.446 qpair failed and we were unable to recover it. 00:24:59.446 [2024-07-15 22:48:42.651579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.446 [2024-07-15 22:48:42.651604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.446 qpair failed and we were unable to recover it. 
00:24:59.446 [2024-07-15 22:48:42.651780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.446 [2024-07-15 22:48:42.651806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.446 qpair failed and we were unable to recover it. 00:24:59.446 [2024-07-15 22:48:42.651950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.446 [2024-07-15 22:48:42.651976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.446 qpair failed and we were unable to recover it. 00:24:59.446 [2024-07-15 22:48:42.652200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.446 [2024-07-15 22:48:42.652225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.446 qpair failed and we were unable to recover it. 00:24:59.446 [2024-07-15 22:48:42.652402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.446 [2024-07-15 22:48:42.652427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.446 qpair failed and we were unable to recover it. 00:24:59.446 [2024-07-15 22:48:42.652595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.446 [2024-07-15 22:48:42.652620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.446 qpair failed and we were unable to recover it. 
00:24:59.446 [2024-07-15 22:48:42.652797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.446 [2024-07-15 22:48:42.652823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.446 qpair failed and we were unable to recover it. 00:24:59.446 [2024-07-15 22:48:42.653002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.446 [2024-07-15 22:48:42.653029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 00:24:59.447 [2024-07-15 22:48:42.653203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.447 [2024-07-15 22:48:42.653229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 00:24:59.447 [2024-07-15 22:48:42.653407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.447 [2024-07-15 22:48:42.653432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 00:24:59.447 [2024-07-15 22:48:42.653608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.447 [2024-07-15 22:48:42.653633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 
00:24:59.447 [2024-07-15 22:48:42.653783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.447 [2024-07-15 22:48:42.653808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 00:24:59.447 [2024-07-15 22:48:42.653986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.447 [2024-07-15 22:48:42.654012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 00:24:59.447 [2024-07-15 22:48:42.654163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.447 [2024-07-15 22:48:42.654189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 00:24:59.447 [2024-07-15 22:48:42.654354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.447 [2024-07-15 22:48:42.654380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 00:24:59.447 [2024-07-15 22:48:42.654556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.447 [2024-07-15 22:48:42.654581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 
00:24:59.447 [2024-07-15 22:48:42.654753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.447 [2024-07-15 22:48:42.654779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 00:24:59.447 [2024-07-15 22:48:42.654922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.447 [2024-07-15 22:48:42.654948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 00:24:59.447 [2024-07-15 22:48:42.655101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.447 [2024-07-15 22:48:42.655127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 00:24:59.447 [2024-07-15 22:48:42.655280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.447 [2024-07-15 22:48:42.655305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 00:24:59.447 [2024-07-15 22:48:42.655514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.447 [2024-07-15 22:48:42.655540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 
00:24:59.447 [2024-07-15 22:48:42.655748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.447 [2024-07-15 22:48:42.655774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 00:24:59.447 [2024-07-15 22:48:42.655957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.447 [2024-07-15 22:48:42.655983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 00:24:59.447 [2024-07-15 22:48:42.656180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.447 [2024-07-15 22:48:42.656206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 00:24:59.447 [2024-07-15 22:48:42.656382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.447 [2024-07-15 22:48:42.656407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 00:24:59.447 [2024-07-15 22:48:42.656570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.447 [2024-07-15 22:48:42.656595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 
00:24:59.447 [2024-07-15 22:48:42.656794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.447 [2024-07-15 22:48:42.656820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 00:24:59.447 [2024-07-15 22:48:42.657032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.447 [2024-07-15 22:48:42.657058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 00:24:59.447 [2024-07-15 22:48:42.657228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.447 [2024-07-15 22:48:42.657254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 00:24:59.447 [2024-07-15 22:48:42.657423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.447 [2024-07-15 22:48:42.657449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 00:24:59.447 [2024-07-15 22:48:42.657622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.447 [2024-07-15 22:48:42.657647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 
00:24:59.447 [2024-07-15 22:48:42.657846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.447 [2024-07-15 22:48:42.657872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 00:24:59.447 [2024-07-15 22:48:42.658047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.447 [2024-07-15 22:48:42.658073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 00:24:59.447 [2024-07-15 22:48:42.658324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.447 [2024-07-15 22:48:42.658349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 00:24:59.447 [2024-07-15 22:48:42.658546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.447 [2024-07-15 22:48:42.658576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 00:24:59.447 [2024-07-15 22:48:42.658725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.447 [2024-07-15 22:48:42.658752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 
00:24:59.447 [2024-07-15 22:48:42.658929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.447 [2024-07-15 22:48:42.658956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 00:24:59.447 [2024-07-15 22:48:42.659099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.447 [2024-07-15 22:48:42.659124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 00:24:59.447 [2024-07-15 22:48:42.659293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.447 [2024-07-15 22:48:42.659318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 00:24:59.447 [2024-07-15 22:48:42.659468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.447 [2024-07-15 22:48:42.659495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 00:24:59.447 [2024-07-15 22:48:42.659644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.447 [2024-07-15 22:48:42.659670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 
00:24:59.447 [2024-07-15 22:48:42.659886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.447 [2024-07-15 22:48:42.659913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.447 qpair failed and we were unable to recover it. 00:24:59.447 [2024-07-15 22:48:42.660058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.660085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 00:24:59.448 [2024-07-15 22:48:42.660259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.660286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 00:24:59.448 [2024-07-15 22:48:42.660463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.660491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 00:24:59.448 [2024-07-15 22:48:42.660674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.660700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 
00:24:59.448 [2024-07-15 22:48:42.660900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.660927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 00:24:59.448 [2024-07-15 22:48:42.661065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.661091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 00:24:59.448 [2024-07-15 22:48:42.661240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.661265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 00:24:59.448 [2024-07-15 22:48:42.661434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.661460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 00:24:59.448 [2024-07-15 22:48:42.661634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.661660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 
00:24:59.448 [2024-07-15 22:48:42.661824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.661849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 00:24:59.448 [2024-07-15 22:48:42.662031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.662057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 00:24:59.448 [2024-07-15 22:48:42.662226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.662251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 00:24:59.448 [2024-07-15 22:48:42.662391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.662417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 00:24:59.448 [2024-07-15 22:48:42.662614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.662640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 
00:24:59.448 [2024-07-15 22:48:42.662780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.662805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 00:24:59.448 [2024-07-15 22:48:42.662945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.662971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 00:24:59.448 [2024-07-15 22:48:42.663153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.663178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 00:24:59.448 [2024-07-15 22:48:42.663363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.663388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 00:24:59.448 [2024-07-15 22:48:42.663600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.663625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 
00:24:59.448 [2024-07-15 22:48:42.663808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.663835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 00:24:59.448 [2024-07-15 22:48:42.664020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.664045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 00:24:59.448 [2024-07-15 22:48:42.664256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.664282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 00:24:59.448 [2024-07-15 22:48:42.664427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.664453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 00:24:59.448 [2024-07-15 22:48:42.664635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.664660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 
00:24:59.448 [2024-07-15 22:48:42.664808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.664834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 00:24:59.448 [2024-07-15 22:48:42.664998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.665024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 00:24:59.448 [2024-07-15 22:48:42.665226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.665253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 00:24:59.448 [2024-07-15 22:48:42.665430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.665463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 00:24:59.448 [2024-07-15 22:48:42.665676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.665701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 
00:24:59.448 [2024-07-15 22:48:42.665852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.665883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 00:24:59.448 [2024-07-15 22:48:42.666089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.666115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 00:24:59.448 [2024-07-15 22:48:42.666266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.666303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 00:24:59.448 [2024-07-15 22:48:42.666477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.666507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 00:24:59.448 [2024-07-15 22:48:42.666657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.666684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 
00:24:59.448 [2024-07-15 22:48:42.666857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.666889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 00:24:59.448 [2024-07-15 22:48:42.667050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.667076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 00:24:59.448 [2024-07-15 22:48:42.667263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.667290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 00:24:59.448 [2024-07-15 22:48:42.667443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.667476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 00:24:59.448 [2024-07-15 22:48:42.667645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.448 [2024-07-15 22:48:42.667681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.448 qpair failed and we were unable to recover it. 
00:24:59.448 [2024-07-15 22:48:42.667873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.448 [2024-07-15 22:48:42.667904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.448 qpair failed and we were unable to recover it.
00:24:59.448 [2024-07-15 22:48:42.668056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.448 [2024-07-15 22:48:42.668082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.448 qpair failed and we were unable to recover it.
00:24:59.448 [2024-07-15 22:48:42.668226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.668254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.668434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.668459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.668665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.668692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.668917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.668944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.669118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.669144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.669311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.669337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.669532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.669557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.669735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.669763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.669966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.669993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.670165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.670191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.670337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.670362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.670555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.670580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.670759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.670786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.670935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.670962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.671166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.671191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.671369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.671395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.671575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.671601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.671780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.671807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.671988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.672014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.672197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.672223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.672403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.672429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.672602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.672629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.672793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.672830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.672990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.673016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.673162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.673196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.673337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.673362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.673513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.673539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.673680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.673707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.673852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.673883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.674038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.674063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.674191] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:59.449 [2024-07-15 22:48:42.674231] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:59.449 [2024-07-15 22:48:42.674236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.674253] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:59.449 [2024-07-15 22:48:42.674260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 [2024-07-15 22:48:42.674271] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.674282] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:59.449 [2024-07-15 22:48:42.674417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.674445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.674637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.674683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 [2024-07-15 22:48:42.674636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.674713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:24:59.449 [2024-07-15 22:48:42.674685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:24:59.449 [2024-07-15 22:48:42.674716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:24:59.449 [2024-07-15 22:48:42.674854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.674885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.675067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.675092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.675235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.675261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.675404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.675430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.449 [2024-07-15 22:48:42.675597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.449 [2024-07-15 22:48:42.675623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.449 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.675805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.675843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.676066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.676093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.676297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.676323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.676509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.676539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.676697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.676724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.676884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.676918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.677144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.677171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.677346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.677372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.677549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.677575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.677719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.677745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.677913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.677941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.678089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.678122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.678317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.678346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.678622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.678648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.678791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.678817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.678962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.678989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.679134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.679161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.679327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.679357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.679534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.679561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.679734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.679760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.679934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.679960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.680117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.680144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.680354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.680386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.680572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.680598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.680790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.680818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.680969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.681006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.681169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.681194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.681376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.681401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.681549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.681576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.681755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.681781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.682020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.682047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.682223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.682249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.682395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.682432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.682588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.682614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.682770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.682798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.682945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.682973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.683131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.683159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.683433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.683459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.683663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.683689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.683884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.683913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.450 qpair failed and we were unable to recover it.
00:24:59.450 [2024-07-15 22:48:42.684095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.450 [2024-07-15 22:48:42.684122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.451 qpair failed and we were unable to recover it.
00:24:59.451 [2024-07-15 22:48:42.684318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.451 [2024-07-15 22:48:42.684345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.451 qpair failed and we were unable to recover it.
00:24:59.451 [2024-07-15 22:48:42.684525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.451 [2024-07-15 22:48:42.684551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.451 qpair failed and we were unable to recover it.
00:24:59.451 [2024-07-15 22:48:42.684700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.451 [2024-07-15 22:48:42.684726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.451 qpair failed and we were unable to recover it.
00:24:59.451 [2024-07-15 22:48:42.684889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.451 [2024-07-15 22:48:42.684917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.451 qpair failed and we were unable to recover it.
00:24:59.451 [2024-07-15 22:48:42.685220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.451 [2024-07-15 22:48:42.685256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.451 qpair failed and we were unable to recover it.
00:24:59.451 [2024-07-15 22:48:42.685428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.451 [2024-07-15 22:48:42.685455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.451 qpair failed and we were unable to recover it.
00:24:59.451 [2024-07-15 22:48:42.685604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.451 [2024-07-15 22:48:42.685630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.451 qpair failed and we were unable to recover it.
00:24:59.451 [2024-07-15 22:48:42.685808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.451 [2024-07-15 22:48:42.685835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.451 qpair failed and we were unable to recover it.
00:24:59.451 [2024-07-15 22:48:42.685995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.451 [2024-07-15 22:48:42.686023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.451 qpair failed and we were unable to recover it.
00:24:59.451 [2024-07-15 22:48:42.686195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.451 [2024-07-15 22:48:42.686221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.451 qpair failed and we were unable to recover it.
00:24:59.451 [2024-07-15 22:48:42.686383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.451 [2024-07-15 22:48:42.686410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.451 qpair failed and we were unable to recover it.
00:24:59.451 [2024-07-15 22:48:42.686584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.451 [2024-07-15 22:48:42.686610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.451 qpair failed and we were unable to recover it. 00:24:59.451 [2024-07-15 22:48:42.686779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.451 [2024-07-15 22:48:42.686805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.451 qpair failed and we were unable to recover it. 00:24:59.451 [2024-07-15 22:48:42.686951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.451 [2024-07-15 22:48:42.686978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.451 qpair failed and we were unable to recover it. 00:24:59.451 [2024-07-15 22:48:42.687171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.451 [2024-07-15 22:48:42.687198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.451 qpair failed and we were unable to recover it. 00:24:59.451 [2024-07-15 22:48:42.687385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.451 [2024-07-15 22:48:42.687411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.451 qpair failed and we were unable to recover it. 
00:24:59.451 [2024-07-15 22:48:42.687584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.451 [2024-07-15 22:48:42.687612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.451 qpair failed and we were unable to recover it. 00:24:59.451 [2024-07-15 22:48:42.687779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.451 [2024-07-15 22:48:42.687814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.451 qpair failed and we were unable to recover it. 00:24:59.451 [2024-07-15 22:48:42.687967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.451 [2024-07-15 22:48:42.687994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.451 qpair failed and we were unable to recover it. 00:24:59.451 [2024-07-15 22:48:42.688139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.451 [2024-07-15 22:48:42.688164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.451 qpair failed and we were unable to recover it. 00:24:59.451 [2024-07-15 22:48:42.688348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.451 [2024-07-15 22:48:42.688375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.451 qpair failed and we were unable to recover it. 
00:24:59.451 [2024-07-15 22:48:42.688519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.451 [2024-07-15 22:48:42.688545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.451 qpair failed and we were unable to recover it. 00:24:59.451 [2024-07-15 22:48:42.688697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.451 [2024-07-15 22:48:42.688723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.451 qpair failed and we were unable to recover it. 00:24:59.451 [2024-07-15 22:48:42.688921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.451 [2024-07-15 22:48:42.688949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.451 qpair failed and we were unable to recover it. 00:24:59.451 [2024-07-15 22:48:42.689111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.451 [2024-07-15 22:48:42.689139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.451 qpair failed and we were unable to recover it. 00:24:59.451 [2024-07-15 22:48:42.689299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.451 [2024-07-15 22:48:42.689325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.451 qpair failed and we were unable to recover it. 
00:24:59.451 [2024-07-15 22:48:42.689473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.451 [2024-07-15 22:48:42.689503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.451 qpair failed and we were unable to recover it. 00:24:59.451 [2024-07-15 22:48:42.689672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.451 [2024-07-15 22:48:42.689698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.451 qpair failed and we were unable to recover it. 00:24:59.451 [2024-07-15 22:48:42.689838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.451 [2024-07-15 22:48:42.689863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.451 qpair failed and we were unable to recover it. 00:24:59.451 [2024-07-15 22:48:42.690020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.451 [2024-07-15 22:48:42.690056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.451 qpair failed and we were unable to recover it. 00:24:59.451 [2024-07-15 22:48:42.690243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.451 [2024-07-15 22:48:42.690269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.451 qpair failed and we were unable to recover it. 
00:24:59.451 [2024-07-15 22:48:42.690431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.451 [2024-07-15 22:48:42.690462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.451 qpair failed and we were unable to recover it. 00:24:59.451 [2024-07-15 22:48:42.690643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.451 [2024-07-15 22:48:42.690670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.451 qpair failed and we were unable to recover it. 00:24:59.451 [2024-07-15 22:48:42.690818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.451 [2024-07-15 22:48:42.690843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.451 qpair failed and we were unable to recover it. 00:24:59.451 [2024-07-15 22:48:42.690993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.451 [2024-07-15 22:48:42.691020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.451 qpair failed and we were unable to recover it. 00:24:59.451 [2024-07-15 22:48:42.691171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.451 [2024-07-15 22:48:42.691197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.451 qpair failed and we were unable to recover it. 
00:24:59.451 [2024-07-15 22:48:42.691368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.451 [2024-07-15 22:48:42.691394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.451 qpair failed and we were unable to recover it. 00:24:59.451 [2024-07-15 22:48:42.691564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.451 [2024-07-15 22:48:42.691590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.451 qpair failed and we were unable to recover it. 00:24:59.451 [2024-07-15 22:48:42.691742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.451 [2024-07-15 22:48:42.691778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.451 qpair failed and we were unable to recover it. 00:24:59.451 [2024-07-15 22:48:42.691972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.451 [2024-07-15 22:48:42.691999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.451 qpair failed and we were unable to recover it. 00:24:59.451 [2024-07-15 22:48:42.692138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.451 [2024-07-15 22:48:42.692164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.451 qpair failed and we were unable to recover it. 
00:24:59.452 [2024-07-15 22:48:42.692366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.692391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 00:24:59.452 [2024-07-15 22:48:42.692536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.692563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 00:24:59.452 [2024-07-15 22:48:42.692705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.692731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 00:24:59.452 [2024-07-15 22:48:42.692899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.692925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 00:24:59.452 [2024-07-15 22:48:42.693194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.693219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 
00:24:59.452 [2024-07-15 22:48:42.693397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.693424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 00:24:59.452 [2024-07-15 22:48:42.693579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.693616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 00:24:59.452 [2024-07-15 22:48:42.693843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.693869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 00:24:59.452 [2024-07-15 22:48:42.694013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.694039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 00:24:59.452 [2024-07-15 22:48:42.694185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.694211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 
00:24:59.452 [2024-07-15 22:48:42.694355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.694381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 00:24:59.452 [2024-07-15 22:48:42.694548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.694574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 00:24:59.452 [2024-07-15 22:48:42.694712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.694745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 00:24:59.452 [2024-07-15 22:48:42.694901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.694928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 00:24:59.452 [2024-07-15 22:48:42.695093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.695120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 
00:24:59.452 [2024-07-15 22:48:42.695399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.695425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 00:24:59.452 [2024-07-15 22:48:42.695597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.695623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 00:24:59.452 [2024-07-15 22:48:42.695768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.695795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 00:24:59.452 [2024-07-15 22:48:42.695990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.696016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 00:24:59.452 [2024-07-15 22:48:42.696158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.696186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 
00:24:59.452 [2024-07-15 22:48:42.696322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.696354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 00:24:59.452 [2024-07-15 22:48:42.696535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.696561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 00:24:59.452 [2024-07-15 22:48:42.696736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.696762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 00:24:59.452 [2024-07-15 22:48:42.696914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.696942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 00:24:59.452 [2024-07-15 22:48:42.697118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.697144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 
00:24:59.452 [2024-07-15 22:48:42.697336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.697361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 00:24:59.452 [2024-07-15 22:48:42.697504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.697529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 00:24:59.452 [2024-07-15 22:48:42.697679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.697706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 00:24:59.452 [2024-07-15 22:48:42.697928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.697960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 00:24:59.452 [2024-07-15 22:48:42.698133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.698158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 
00:24:59.452 [2024-07-15 22:48:42.698320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.698346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 00:24:59.452 [2024-07-15 22:48:42.698514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.698539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 00:24:59.452 [2024-07-15 22:48:42.698711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.698737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 00:24:59.452 [2024-07-15 22:48:42.698888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.698914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 00:24:59.452 [2024-07-15 22:48:42.699056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.452 [2024-07-15 22:48:42.699082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.452 qpair failed and we were unable to recover it. 
00:24:59.452 [2024-07-15 22:48:42.699303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.453 [2024-07-15 22:48:42.699333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.453 qpair failed and we were unable to recover it. 00:24:59.453 [2024-07-15 22:48:42.699497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.453 [2024-07-15 22:48:42.699524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.453 qpair failed and we were unable to recover it. 00:24:59.453 [2024-07-15 22:48:42.699669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.453 [2024-07-15 22:48:42.699695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.453 qpair failed and we were unable to recover it. 00:24:59.453 [2024-07-15 22:48:42.699837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.453 [2024-07-15 22:48:42.699864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.453 qpair failed and we were unable to recover it. 00:24:59.453 [2024-07-15 22:48:42.700025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.453 [2024-07-15 22:48:42.700051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.453 qpair failed and we were unable to recover it. 
00:24:59.453 [2024-07-15 22:48:42.700212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.453 [2024-07-15 22:48:42.700237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.453 qpair failed and we were unable to recover it. 00:24:59.453 [2024-07-15 22:48:42.700440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.453 [2024-07-15 22:48:42.700466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.453 qpair failed and we were unable to recover it. 00:24:59.453 [2024-07-15 22:48:42.700616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.453 [2024-07-15 22:48:42.700643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.453 qpair failed and we were unable to recover it. 00:24:59.453 [2024-07-15 22:48:42.700835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.453 [2024-07-15 22:48:42.700862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.453 qpair failed and we were unable to recover it. 00:24:59.453 [2024-07-15 22:48:42.701064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.453 [2024-07-15 22:48:42.701090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.453 qpair failed and we were unable to recover it. 
00:24:59.453 [2024-07-15 22:48:42.701277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.453 [2024-07-15 22:48:42.701303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.453 qpair failed and we were unable to recover it. 00:24:59.453 [2024-07-15 22:48:42.701446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.453 [2024-07-15 22:48:42.701472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.453 qpair failed and we were unable to recover it. 00:24:59.453 [2024-07-15 22:48:42.701622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.453 [2024-07-15 22:48:42.701649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.453 qpair failed and we were unable to recover it. 00:24:59.453 [2024-07-15 22:48:42.701821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.453 [2024-07-15 22:48:42.701847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.453 qpair failed and we were unable to recover it. 00:24:59.453 [2024-07-15 22:48:42.701992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.453 [2024-07-15 22:48:42.702018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.453 qpair failed and we were unable to recover it. 
00:24:59.453 [2024-07-15 22:48:42.702205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.453 [2024-07-15 22:48:42.702232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.453 qpair failed and we were unable to recover it. 00:24:59.453 [2024-07-15 22:48:42.702383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.453 [2024-07-15 22:48:42.702410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.453 qpair failed and we were unable to recover it. 00:24:59.453 [2024-07-15 22:48:42.702559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.453 [2024-07-15 22:48:42.702586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.453 qpair failed and we were unable to recover it. 00:24:59.453 [2024-07-15 22:48:42.702762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.453 [2024-07-15 22:48:42.702787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.453 qpair failed and we were unable to recover it. 00:24:59.453 [2024-07-15 22:48:42.702927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.453 [2024-07-15 22:48:42.702954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.453 qpair failed and we were unable to recover it. 
00:24:59.453 [2024-07-15 22:48:42.703101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.453 [2024-07-15 22:48:42.703128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.453 qpair failed and we were unable to recover it. 00:24:59.453 [2024-07-15 22:48:42.703333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.453 [2024-07-15 22:48:42.703359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.453 qpair failed and we were unable to recover it. 00:24:59.453 [2024-07-15 22:48:42.703498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.453 [2024-07-15 22:48:42.703524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.453 qpair failed and we were unable to recover it. 00:24:59.453 [2024-07-15 22:48:42.703673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.453 [2024-07-15 22:48:42.703700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.453 qpair failed and we were unable to recover it. 00:24:59.453 [2024-07-15 22:48:42.703856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.453 [2024-07-15 22:48:42.703890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.453 qpair failed and we were unable to recover it. 
00:24:59.453 [2024-07-15 22:48:42.707059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.453 [2024-07-15 22:48:42.707101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:24:59.453 qpair failed and we were unable to recover it.
00:24:59.456 [2024-07-15 22:48:42.725145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.456 [2024-07-15 22:48:42.725171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.456 qpair failed and we were unable to recover it. 00:24:59.456 [2024-07-15 22:48:42.725339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.456 [2024-07-15 22:48:42.725365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.456 qpair failed and we were unable to recover it. 00:24:59.456 [2024-07-15 22:48:42.725519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.456 [2024-07-15 22:48:42.725545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.456 qpair failed and we were unable to recover it. 00:24:59.456 [2024-07-15 22:48:42.725714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.456 [2024-07-15 22:48:42.725740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.456 qpair failed and we were unable to recover it. 00:24:59.456 [2024-07-15 22:48:42.725892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.456 [2024-07-15 22:48:42.725918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.456 qpair failed and we were unable to recover it. 
00:24:59.456 [2024-07-15 22:48:42.726123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.456 [2024-07-15 22:48:42.726150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.456 qpair failed and we were unable to recover it. 00:24:59.456 [2024-07-15 22:48:42.726336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.456 [2024-07-15 22:48:42.726363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.456 qpair failed and we were unable to recover it. 00:24:59.456 [2024-07-15 22:48:42.726556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.456 [2024-07-15 22:48:42.726582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.456 qpair failed and we were unable to recover it. 00:24:59.456 [2024-07-15 22:48:42.726758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.456 [2024-07-15 22:48:42.726784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.456 qpair failed and we were unable to recover it. 00:24:59.456 [2024-07-15 22:48:42.726926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.456 [2024-07-15 22:48:42.726953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.456 qpair failed and we were unable to recover it. 
00:24:59.456 [2024-07-15 22:48:42.727095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.456 [2024-07-15 22:48:42.727121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.456 qpair failed and we were unable to recover it. 00:24:59.456 [2024-07-15 22:48:42.727282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.456 [2024-07-15 22:48:42.727309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.456 qpair failed and we were unable to recover it. 00:24:59.456 [2024-07-15 22:48:42.727461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.456 [2024-07-15 22:48:42.727487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.456 qpair failed and we were unable to recover it. 00:24:59.456 [2024-07-15 22:48:42.727654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.456 [2024-07-15 22:48:42.727680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.456 qpair failed and we were unable to recover it. 00:24:59.456 [2024-07-15 22:48:42.727828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.456 [2024-07-15 22:48:42.727854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.456 qpair failed and we were unable to recover it. 
00:24:59.456 [2024-07-15 22:48:42.728006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.456 [2024-07-15 22:48:42.728034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.456 qpair failed and we were unable to recover it. 00:24:59.456 [2024-07-15 22:48:42.728222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.456 [2024-07-15 22:48:42.728249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.456 qpair failed and we were unable to recover it. 00:24:59.456 [2024-07-15 22:48:42.728420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.456 [2024-07-15 22:48:42.728446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.456 qpair failed and we were unable to recover it. 00:24:59.456 [2024-07-15 22:48:42.728589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.456 [2024-07-15 22:48:42.728620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.456 qpair failed and we were unable to recover it. 00:24:59.456 [2024-07-15 22:48:42.728792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.456 [2024-07-15 22:48:42.728819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.456 qpair failed and we were unable to recover it. 
00:24:59.456 [2024-07-15 22:48:42.728973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.456 [2024-07-15 22:48:42.728999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.456 qpair failed and we were unable to recover it. 00:24:59.456 [2024-07-15 22:48:42.729172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.456 [2024-07-15 22:48:42.729198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.456 qpair failed and we were unable to recover it. 00:24:59.456 [2024-07-15 22:48:42.729345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.729371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 00:24:59.457 [2024-07-15 22:48:42.729563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.729590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 00:24:59.457 [2024-07-15 22:48:42.729760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.729786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 
00:24:59.457 [2024-07-15 22:48:42.729961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.729987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 00:24:59.457 [2024-07-15 22:48:42.730149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.730177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 00:24:59.457 [2024-07-15 22:48:42.730348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.730374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 00:24:59.457 [2024-07-15 22:48:42.730522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.730548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 00:24:59.457 [2024-07-15 22:48:42.730722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.730748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 
00:24:59.457 [2024-07-15 22:48:42.730895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.730922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 00:24:59.457 [2024-07-15 22:48:42.731062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.731088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 00:24:59.457 [2024-07-15 22:48:42.731237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.731263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 00:24:59.457 [2024-07-15 22:48:42.731454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.731479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 00:24:59.457 [2024-07-15 22:48:42.731620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.731647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 
00:24:59.457 [2024-07-15 22:48:42.731820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.731846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 00:24:59.457 [2024-07-15 22:48:42.732056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.732083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 00:24:59.457 [2024-07-15 22:48:42.732265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.732291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 00:24:59.457 [2024-07-15 22:48:42.732471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.732497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 00:24:59.457 [2024-07-15 22:48:42.732652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.732678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 
00:24:59.457 [2024-07-15 22:48:42.732822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.732847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 00:24:59.457 [2024-07-15 22:48:42.733061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.733088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 00:24:59.457 [2024-07-15 22:48:42.733276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.733302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 00:24:59.457 [2024-07-15 22:48:42.733481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.733507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 00:24:59.457 [2024-07-15 22:48:42.733679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.733704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 
00:24:59.457 [2024-07-15 22:48:42.733898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.733925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 00:24:59.457 [2024-07-15 22:48:42.734079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.734105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 00:24:59.457 [2024-07-15 22:48:42.734275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.734301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 00:24:59.457 [2024-07-15 22:48:42.734439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.734465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 00:24:59.457 [2024-07-15 22:48:42.734610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.734636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 
00:24:59.457 [2024-07-15 22:48:42.734778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.734804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 00:24:59.457 [2024-07-15 22:48:42.734951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.734979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 00:24:59.457 [2024-07-15 22:48:42.735131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.735156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 00:24:59.457 [2024-07-15 22:48:42.735331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.735357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 00:24:59.457 [2024-07-15 22:48:42.735526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.735552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 
00:24:59.457 [2024-07-15 22:48:42.735694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.735721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 00:24:59.457 [2024-07-15 22:48:42.735903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.735930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 00:24:59.457 [2024-07-15 22:48:42.736103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.736129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 00:24:59.457 [2024-07-15 22:48:42.736297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.736332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 00:24:59.457 [2024-07-15 22:48:42.736519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.736545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 
00:24:59.457 [2024-07-15 22:48:42.736716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.736742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 00:24:59.457 [2024-07-15 22:48:42.736894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.736921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 00:24:59.457 [2024-07-15 22:48:42.737067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.457 [2024-07-15 22:48:42.737094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.457 qpair failed and we were unable to recover it. 00:24:59.457 [2024-07-15 22:48:42.737265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.458 [2024-07-15 22:48:42.737291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.458 qpair failed and we were unable to recover it. 00:24:59.458 [2024-07-15 22:48:42.737471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.458 [2024-07-15 22:48:42.737498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.458 qpair failed and we were unable to recover it. 
00:24:59.458 [2024-07-15 22:48:42.737638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.458 [2024-07-15 22:48:42.737664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.458 qpair failed and we were unable to recover it. 00:24:59.458 [2024-07-15 22:48:42.737808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.458 [2024-07-15 22:48:42.737834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.458 qpair failed and we were unable to recover it. 00:24:59.458 [2024-07-15 22:48:42.738017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.458 [2024-07-15 22:48:42.738043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.458 qpair failed and we were unable to recover it. 00:24:59.458 [2024-07-15 22:48:42.738205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.458 [2024-07-15 22:48:42.738232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.458 qpair failed and we were unable to recover it. 00:24:59.458 [2024-07-15 22:48:42.738403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.458 [2024-07-15 22:48:42.738429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.458 qpair failed and we were unable to recover it. 
00:24:59.458 [2024-07-15 22:48:42.738572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.458 [2024-07-15 22:48:42.738598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.458 qpair failed and we were unable to recover it. 00:24:59.458 [2024-07-15 22:48:42.738746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.458 [2024-07-15 22:48:42.738772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.458 qpair failed and we were unable to recover it. 00:24:59.458 [2024-07-15 22:48:42.738929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.458 [2024-07-15 22:48:42.738956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.458 qpair failed and we were unable to recover it. 00:24:59.458 [2024-07-15 22:48:42.739111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.458 [2024-07-15 22:48:42.739139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.458 qpair failed and we were unable to recover it. 00:24:59.458 [2024-07-15 22:48:42.739338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.458 [2024-07-15 22:48:42.739364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.458 qpair failed and we were unable to recover it. 
00:24:59.458 [2024-07-15 22:48:42.739503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.458 [2024-07-15 22:48:42.739529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420 00:24:59.458 qpair failed and we were unable to recover it. 00:24:59.458 [2024-07-15 22:48:42.739734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.458 [2024-07-15 22:48:42.739778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.458 qpair failed and we were unable to recover it. 00:24:59.458 [2024-07-15 22:48:42.739934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.458 [2024-07-15 22:48:42.739972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.458 qpair failed and we were unable to recover it. 00:24:59.458 [2024-07-15 22:48:42.740143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.458 [2024-07-15 22:48:42.740169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.458 qpair failed and we were unable to recover it. 00:24:59.458 [2024-07-15 22:48:42.740349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.458 [2024-07-15 22:48:42.740375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.458 qpair failed and we were unable to recover it. 
[... the identical connect() failed, errno = 111 / qpair failed record for tqpair=0x7fd668000b90 (addr=10.0.0.2, port=4420) repeats through 2024-07-15 22:48:42.762440 ...]
00:24:59.461 [2024-07-15 22:48:42.762594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.461 [2024-07-15 22:48:42.762620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.461 qpair failed and we were unable to recover it. 00:24:59.461 [2024-07-15 22:48:42.762798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.461 [2024-07-15 22:48:42.762825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.461 qpair failed and we were unable to recover it. 00:24:59.461 [2024-07-15 22:48:42.762988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.461 [2024-07-15 22:48:42.763015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.461 qpair failed and we were unable to recover it. 00:24:59.461 [2024-07-15 22:48:42.763159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.461 [2024-07-15 22:48:42.763185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.461 qpair failed and we were unable to recover it. 00:24:59.461 [2024-07-15 22:48:42.763334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.461 [2024-07-15 22:48:42.763363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.461 qpair failed and we were unable to recover it. 
00:24:59.461 [2024-07-15 22:48:42.763543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.461 [2024-07-15 22:48:42.763571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.461 qpair failed and we were unable to recover it. 00:24:59.461 [2024-07-15 22:48:42.763746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.461 [2024-07-15 22:48:42.763772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.461 qpair failed and we were unable to recover it. 00:24:59.461 [2024-07-15 22:48:42.763917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.461 [2024-07-15 22:48:42.763944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.461 qpair failed and we were unable to recover it. 00:24:59.461 [2024-07-15 22:48:42.764113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.461 [2024-07-15 22:48:42.764139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.461 qpair failed and we were unable to recover it. 00:24:59.461 [2024-07-15 22:48:42.764284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.461 [2024-07-15 22:48:42.764311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.461 qpair failed and we were unable to recover it. 
00:24:59.461 [2024-07-15 22:48:42.764477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.461 [2024-07-15 22:48:42.764503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.461 qpair failed and we were unable to recover it. 00:24:59.461 [2024-07-15 22:48:42.764673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.461 [2024-07-15 22:48:42.764707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.461 qpair failed and we were unable to recover it. 00:24:59.461 [2024-07-15 22:48:42.764866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.461 [2024-07-15 22:48:42.764899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.461 qpair failed and we were unable to recover it. 00:24:59.461 [2024-07-15 22:48:42.765070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.461 [2024-07-15 22:48:42.765095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.461 qpair failed and we were unable to recover it. 00:24:59.461 [2024-07-15 22:48:42.765263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.461 [2024-07-15 22:48:42.765289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.461 qpair failed and we were unable to recover it. 
00:24:59.461 [2024-07-15 22:48:42.765442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.461 [2024-07-15 22:48:42.765468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.461 qpair failed and we were unable to recover it. 00:24:59.461 [2024-07-15 22:48:42.765618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.461 [2024-07-15 22:48:42.765644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.461 qpair failed and we were unable to recover it. 00:24:59.461 [2024-07-15 22:48:42.765793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.461 [2024-07-15 22:48:42.765819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.461 qpair failed and we were unable to recover it. 00:24:59.461 [2024-07-15 22:48:42.765962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.461 [2024-07-15 22:48:42.765989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.461 qpair failed and we were unable to recover it. 00:24:59.461 [2024-07-15 22:48:42.766142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.461 [2024-07-15 22:48:42.766177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.461 qpair failed and we were unable to recover it. 
00:24:59.461 [2024-07-15 22:48:42.766338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.461 [2024-07-15 22:48:42.766366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.461 qpair failed and we were unable to recover it. 00:24:59.461 [2024-07-15 22:48:42.766529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.461 [2024-07-15 22:48:42.766555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.461 qpair failed and we were unable to recover it. 00:24:59.461 [2024-07-15 22:48:42.766718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.461 [2024-07-15 22:48:42.766744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.461 qpair failed and we were unable to recover it. 00:24:59.461 [2024-07-15 22:48:42.766914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.461 [2024-07-15 22:48:42.766941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.461 qpair failed and we were unable to recover it. 00:24:59.461 [2024-07-15 22:48:42.767080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.461 [2024-07-15 22:48:42.767107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.461 qpair failed and we were unable to recover it. 
00:24:59.461 [2024-07-15 22:48:42.767261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.461 [2024-07-15 22:48:42.767287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.461 qpair failed and we were unable to recover it. 00:24:59.461 [2024-07-15 22:48:42.767475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.461 [2024-07-15 22:48:42.767501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.461 qpair failed and we were unable to recover it. 00:24:59.461 [2024-07-15 22:48:42.767654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.461 [2024-07-15 22:48:42.767687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.461 qpair failed and we were unable to recover it. 00:24:59.461 [2024-07-15 22:48:42.767852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.461 [2024-07-15 22:48:42.767889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.461 qpair failed and we were unable to recover it. 00:24:59.461 [2024-07-15 22:48:42.768070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.461 [2024-07-15 22:48:42.768097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.461 qpair failed and we were unable to recover it. 
00:24:59.461 [2024-07-15 22:48:42.768290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.461 [2024-07-15 22:48:42.768317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.461 qpair failed and we were unable to recover it. 00:24:59.461 [2024-07-15 22:48:42.768467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.768493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 00:24:59.462 [2024-07-15 22:48:42.768676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.768702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 00:24:59.462 [2024-07-15 22:48:42.768872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.768903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 00:24:59.462 [2024-07-15 22:48:42.769045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.769071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 
00:24:59.462 [2024-07-15 22:48:42.769234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.769270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 00:24:59.462 [2024-07-15 22:48:42.769449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.769476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 00:24:59.462 [2024-07-15 22:48:42.769638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.769663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 00:24:59.462 [2024-07-15 22:48:42.769817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.769844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 00:24:59.462 [2024-07-15 22:48:42.769997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.770023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 
00:24:59.462 [2024-07-15 22:48:42.770210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.770236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 00:24:59.462 [2024-07-15 22:48:42.770385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.770411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 00:24:59.462 [2024-07-15 22:48:42.770597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.770623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 00:24:59.462 [2024-07-15 22:48:42.770778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.770805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 00:24:59.462 [2024-07-15 22:48:42.771002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.771029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 
00:24:59.462 [2024-07-15 22:48:42.771171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.771197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 00:24:59.462 [2024-07-15 22:48:42.771342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.771367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 00:24:59.462 [2024-07-15 22:48:42.771514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.771539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 00:24:59.462 [2024-07-15 22:48:42.771681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.771707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 00:24:59.462 [2024-07-15 22:48:42.771850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.771881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 
00:24:59.462 [2024-07-15 22:48:42.772062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.772090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 00:24:59.462 [2024-07-15 22:48:42.772256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.772283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 00:24:59.462 [2024-07-15 22:48:42.772463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.772498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 00:24:59.462 [2024-07-15 22:48:42.772647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.772672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 00:24:59.462 [2024-07-15 22:48:42.772819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.772845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 
00:24:59.462 [2024-07-15 22:48:42.772995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.773022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 00:24:59.462 [2024-07-15 22:48:42.773158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.773184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 00:24:59.462 [2024-07-15 22:48:42.773331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.773356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 00:24:59.462 [2024-07-15 22:48:42.773509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.773535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 00:24:59.462 [2024-07-15 22:48:42.773719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.773748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 
00:24:59.462 [2024-07-15 22:48:42.773924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.773958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 00:24:59.462 [2024-07-15 22:48:42.774096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.774122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 00:24:59.462 [2024-07-15 22:48:42.774299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.774325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 00:24:59.462 [2024-07-15 22:48:42.774498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.774524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 00:24:59.462 [2024-07-15 22:48:42.774667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.774693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 
00:24:59.462 [2024-07-15 22:48:42.774835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.774861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 00:24:59.462 [2024-07-15 22:48:42.775044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.775070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 00:24:59.462 [2024-07-15 22:48:42.775283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.775310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 00:24:59.462 [2024-07-15 22:48:42.775456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.775486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 00:24:59.462 [2024-07-15 22:48:42.775638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.775664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 
00:24:59.462 [2024-07-15 22:48:42.775854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.775887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 00:24:59.462 [2024-07-15 22:48:42.776030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.462 [2024-07-15 22:48:42.776056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.462 qpair failed and we were unable to recover it. 00:24:59.462 [2024-07-15 22:48:42.776228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.463 [2024-07-15 22:48:42.776254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.463 qpair failed and we were unable to recover it. 00:24:59.463 [2024-07-15 22:48:42.776427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.463 [2024-07-15 22:48:42.776453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.463 qpair failed and we were unable to recover it. 00:24:59.463 [2024-07-15 22:48:42.776642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.463 [2024-07-15 22:48:42.776669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.463 qpair failed and we were unable to recover it. 
00:24:59.463 [2024-07-15 22:48:42.776819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.463 [2024-07-15 22:48:42.776846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.463 qpair failed and we were unable to recover it. 00:24:59.463 [2024-07-15 22:48:42.777027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.463 [2024-07-15 22:48:42.777054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.463 qpair failed and we were unable to recover it. 00:24:59.463 [2024-07-15 22:48:42.777192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.463 [2024-07-15 22:48:42.777218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.463 qpair failed and we were unable to recover it. 00:24:59.463 [2024-07-15 22:48:42.777361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.463 [2024-07-15 22:48:42.777387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.463 qpair failed and we were unable to recover it. 00:24:59.463 [2024-07-15 22:48:42.777536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.463 [2024-07-15 22:48:42.777561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.463 qpair failed and we were unable to recover it. 
00:24:59.463 [... the same connect() failed, errno = 111 / sock connection error (tqpair=0x7fd668000b90, addr=10.0.0.2, port=4420) / qpair failed and we were unable to recover it. record repeats for every reconnect attempt from 2024-07-15 22:48:42.777770 through 22:48:42.799169; only the timestamps differ ...]
00:24:59.466 [2024-07-15 22:48:42.799343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.799369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 00:24:59.466 [2024-07-15 22:48:42.799514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.799540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 00:24:59.466 [2024-07-15 22:48:42.799691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.799716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 00:24:59.466 [2024-07-15 22:48:42.799857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.799888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 00:24:59.466 [2024-07-15 22:48:42.800060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.800086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 
00:24:59.466 [2024-07-15 22:48:42.800234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.800261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 00:24:59.466 [2024-07-15 22:48:42.800444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.800470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 00:24:59.466 [2024-07-15 22:48:42.800618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.800643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 00:24:59.466 [2024-07-15 22:48:42.800802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.800828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 00:24:59.466 [2024-07-15 22:48:42.800973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.801000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 
00:24:59.466 [2024-07-15 22:48:42.801139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.801166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 00:24:59.466 [2024-07-15 22:48:42.801335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.801361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 00:24:59.466 [2024-07-15 22:48:42.801513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.801539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 00:24:59.466 [2024-07-15 22:48:42.801709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.801745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 00:24:59.466 [2024-07-15 22:48:42.801940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.801967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 
00:24:59.466 [2024-07-15 22:48:42.802149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.802175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 00:24:59.466 [2024-07-15 22:48:42.802324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.802350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 00:24:59.466 [2024-07-15 22:48:42.802489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.802515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 00:24:59.466 [2024-07-15 22:48:42.802655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.802681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 00:24:59.466 [2024-07-15 22:48:42.802851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.802882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 
00:24:59.466 [2024-07-15 22:48:42.803058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.803084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 00:24:59.466 [2024-07-15 22:48:42.803257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.803284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 00:24:59.466 [2024-07-15 22:48:42.803486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.803513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 00:24:59.466 [2024-07-15 22:48:42.803690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.803716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 00:24:59.466 [2024-07-15 22:48:42.803888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.803915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 
00:24:59.466 [2024-07-15 22:48:42.804069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.804104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 00:24:59.466 [2024-07-15 22:48:42.804305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.804338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 00:24:59.466 [2024-07-15 22:48:42.804501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.804527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 00:24:59.466 [2024-07-15 22:48:42.804673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.804698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 00:24:59.466 [2024-07-15 22:48:42.804882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.804909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 
00:24:59.466 [2024-07-15 22:48:42.805046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.805072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 00:24:59.466 [2024-07-15 22:48:42.805264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.805291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 00:24:59.466 [2024-07-15 22:48:42.805429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.805454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 00:24:59.466 [2024-07-15 22:48:42.805607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.805633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 00:24:59.466 [2024-07-15 22:48:42.805811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.805838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 
00:24:59.466 [2024-07-15 22:48:42.806001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.806027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 00:24:59.466 [2024-07-15 22:48:42.806189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.806215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 00:24:59.466 [2024-07-15 22:48:42.806391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.806417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 00:24:59.466 [2024-07-15 22:48:42.806588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.806613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.466 qpair failed and we were unable to recover it. 00:24:59.466 [2024-07-15 22:48:42.806785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.466 [2024-07-15 22:48:42.806812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 
00:24:59.467 [2024-07-15 22:48:42.807000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.807028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 00:24:59.467 [2024-07-15 22:48:42.807180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.807206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 00:24:59.467 [2024-07-15 22:48:42.807380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.807406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 00:24:59.467 [2024-07-15 22:48:42.807583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.807609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 00:24:59.467 [2024-07-15 22:48:42.807751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.807777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 
00:24:59.467 [2024-07-15 22:48:42.807924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.807951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 00:24:59.467 [2024-07-15 22:48:42.808097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.808123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 00:24:59.467 [2024-07-15 22:48:42.808313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.808340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 00:24:59.467 [2024-07-15 22:48:42.808519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.808546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 00:24:59.467 [2024-07-15 22:48:42.808726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.808753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 
00:24:59.467 [2024-07-15 22:48:42.808921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.808969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 00:24:59.467 [2024-07-15 22:48:42.809109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.809135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 00:24:59.467 [2024-07-15 22:48:42.809284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.809310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 00:24:59.467 [2024-07-15 22:48:42.809465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.809490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 00:24:59.467 [2024-07-15 22:48:42.809652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.809688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 
00:24:59.467 [2024-07-15 22:48:42.809862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.809915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 00:24:59.467 [2024-07-15 22:48:42.810081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.810108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 00:24:59.467 [2024-07-15 22:48:42.810310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.810336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 00:24:59.467 [2024-07-15 22:48:42.810483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.810509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 00:24:59.467 [2024-07-15 22:48:42.810669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.810695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 
00:24:59.467 [2024-07-15 22:48:42.810867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.810899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 00:24:59.467 [2024-07-15 22:48:42.811043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.811070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 00:24:59.467 [2024-07-15 22:48:42.811219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.811256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 00:24:59.467 [2024-07-15 22:48:42.811451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.811478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 00:24:59.467 [2024-07-15 22:48:42.811614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.811640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 
00:24:59.467 [2024-07-15 22:48:42.811812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.811843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 00:24:59.467 [2024-07-15 22:48:42.812005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.812033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 00:24:59.467 [2024-07-15 22:48:42.812185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.812211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 00:24:59.467 [2024-07-15 22:48:42.812360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.812387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 00:24:59.467 [2024-07-15 22:48:42.812531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.812557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 
00:24:59.467 [2024-07-15 22:48:42.812731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.812759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 00:24:59.467 [2024-07-15 22:48:42.812928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.812955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 00:24:59.467 [2024-07-15 22:48:42.813098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.813124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 00:24:59.467 [2024-07-15 22:48:42.813283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.813309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 00:24:59.467 [2024-07-15 22:48:42.813466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.467 [2024-07-15 22:48:42.813492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.467 qpair failed and we were unable to recover it. 
00:24:59.467 [2024-07-15 22:48:42.813640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.467 [2024-07-15 22:48:42.813666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.467 qpair failed and we were unable to recover it.
00:24:59.467 [2024-07-15 22:48:42.813833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.467 [2024-07-15 22:48:42.813859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.467 qpair failed and we were unable to recover it.
00:24:59.467 [2024-07-15 22:48:42.814009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.467 [2024-07-15 22:48:42.814035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.467 qpair failed and we were unable to recover it.
00:24:59.467 [2024-07-15 22:48:42.814177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.467 [2024-07-15 22:48:42.814205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.467 qpair failed and we were unable to recover it.
00:24:59.467 [2024-07-15 22:48:42.814388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.467 [2024-07-15 22:48:42.814414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.467 qpair failed and we were unable to recover it.
00:24:59.467 [2024-07-15 22:48:42.814599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.467 [2024-07-15 22:48:42.814626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.814765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.814791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.814937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.814963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.815130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.815156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.815296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.815323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.815517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.815542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.815704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.815732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.815875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.815906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.816084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.816111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.816258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.816283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.816420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.816446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.816617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.816643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.816789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.816815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.816959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.816986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.817165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.817192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.817340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.817366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.817534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.817561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.817719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.817744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.817892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.817919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.818097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.818123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.818279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.818305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.818484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.818509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.818693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.818721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.818913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.818941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.819090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.819116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.819260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.819290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.819457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.819482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.819632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.819659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.819848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.819874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.820029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.820056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.820206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.820233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.820422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.820448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.820606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.820631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.820803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.820829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.820966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.820992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.821128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.821154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.821322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.821348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.821534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.821569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.821734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.821760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.821926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.821954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.822107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.468 [2024-07-15 22:48:42.822132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.468 qpair failed and we were unable to recover it.
00:24:59.468 [2024-07-15 22:48:42.822274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.822300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.822449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.822474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.822642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.822668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.822815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.822843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.823022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.823049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.823197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.823223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.823383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.823409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.823560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.823586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.823738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.823765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.823914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.823941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.824088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.824114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.824263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.824288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.824425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.824450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.824618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.824644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.824784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.824812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.824994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.825021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.825161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.825187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.825346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.825373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.825525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.825551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.825721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.825747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.825896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.825923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.826060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.826086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.826258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.826294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.826458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.826483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.826628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.826659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.826832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.826858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.827013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.827040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.827202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.827229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.827402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.827428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.827611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.827637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.827822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.827848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.828005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.828031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.828205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.828231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.828387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.828413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.828568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.828593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.469 qpair failed and we were unable to recover it.
00:24:59.469 [2024-07-15 22:48:42.828730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.469 [2024-07-15 22:48:42.828756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.470 qpair failed and we were unable to recover it.
00:24:59.470 [2024-07-15 22:48:42.828926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.470 [2024-07-15 22:48:42.828952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.470 qpair failed and we were unable to recover it.
00:24:59.470 [2024-07-15 22:48:42.829112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.470 [2024-07-15 22:48:42.829137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.470 qpair failed and we were unable to recover it.
00:24:59.470 [2024-07-15 22:48:42.829298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.470 [2024-07-15 22:48:42.829324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.470 qpair failed and we were unable to recover it.
00:24:59.470 [2024-07-15 22:48:42.829498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.470 [2024-07-15 22:48:42.829523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.470 qpair failed and we were unable to recover it.
00:24:59.470 [2024-07-15 22:48:42.829661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.470 [2024-07-15 22:48:42.829687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.470 qpair failed and we were unable to recover it.
00:24:59.470 [2024-07-15 22:48:42.829841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.470 [2024-07-15 22:48:42.829867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.470 qpair failed and we were unable to recover it.
00:24:59.470 [2024-07-15 22:48:42.830014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.470 [2024-07-15 22:48:42.830040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.470 qpair failed and we were unable to recover it.
00:24:59.470 [2024-07-15 22:48:42.830188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.470 [2024-07-15 22:48:42.830214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.470 qpair failed and we were unable to recover it.
00:24:59.470 [2024-07-15 22:48:42.830382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.470 [2024-07-15 22:48:42.830407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.470 qpair failed and we were unable to recover it.
00:24:59.470 [2024-07-15 22:48:42.830546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.470 [2024-07-15 22:48:42.830572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.470 qpair failed and we were unable to recover it.
00:24:59.470 [2024-07-15 22:48:42.830739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.470 [2024-07-15 22:48:42.830765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.470 qpair failed and we were unable to recover it.
00:24:59.470 [2024-07-15 22:48:42.830942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.470 [2024-07-15 22:48:42.830968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.470 qpair failed and we were unable to recover it.
00:24:59.470 [2024-07-15 22:48:42.831136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.470 [2024-07-15 22:48:42.831161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.470 qpair failed and we were unable to recover it.
00:24:59.470 [2024-07-15 22:48:42.831309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.470 [2024-07-15 22:48:42.831335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.470 qpair failed and we were unable to recover it.
00:24:59.470 [2024-07-15 22:48:42.831500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.470 [2024-07-15 22:48:42.831526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.470 qpair failed and we were unable to recover it.
00:24:59.470 [2024-07-15 22:48:42.831670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.470 [2024-07-15 22:48:42.831696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.470 qpair failed and we were unable to recover it.
00:24:59.470 [2024-07-15 22:48:42.831834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.470 [2024-07-15 22:48:42.831861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.470 qpair failed and we were unable to recover it.
00:24:59.470 [2024-07-15 22:48:42.832007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.470 [2024-07-15 22:48:42.832033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.470 qpair failed and we were unable to recover it.
00:24:59.470 [2024-07-15 22:48:42.832210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.470 [2024-07-15 22:48:42.832236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.470 qpair failed and we were unable to recover it.
00:24:59.470 [2024-07-15 22:48:42.832405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.470 [2024-07-15 22:48:42.832431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.470 qpair failed and we were unable to recover it.
00:24:59.470 [2024-07-15 22:48:42.832571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.470 [2024-07-15 22:48:42.832596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.470 qpair failed and we were unable to recover it.
00:24:59.470 [2024-07-15 22:48:42.832735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.470 [2024-07-15 22:48:42.832761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.470 qpair failed and we were unable to recover it.
00:24:59.470 [2024-07-15 22:48:42.832950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.470 [2024-07-15 22:48:42.832976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.470 qpair failed and we were unable to recover it.
00:24:59.470 [2024-07-15 22:48:42.833146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.470 [2024-07-15 22:48:42.833172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.470 qpair failed and we were unable to recover it.
00:24:59.470 [2024-07-15 22:48:42.833361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.470 [2024-07-15 22:48:42.833387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.470 qpair failed and we were unable to recover it.
00:24:59.470 [2024-07-15 22:48:42.833538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.470 [2024-07-15 22:48:42.833563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.470 qpair failed and we were unable to recover it.
00:24:59.470 [2024-07-15 22:48:42.833727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.470 [2024-07-15 22:48:42.833753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.470 qpair failed and we were unable to recover it.
00:24:59.470 [2024-07-15 22:48:42.833918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.470 [2024-07-15 22:48:42.833945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.470 qpair failed and we were unable to recover it.
00:24:59.470 [2024-07-15 22:48:42.834118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.470 [2024-07-15 22:48:42.834149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.470 qpair failed and we were unable to recover it.
00:24:59.470 [2024-07-15 22:48:42.834323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.470 [2024-07-15 22:48:42.834348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.470 qpair failed and we were unable to recover it.
00:24:59.470 [2024-07-15 22:48:42.834527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.470 [2024-07-15 22:48:42.834554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.470 qpair failed and we were unable to recover it.
00:24:59.470 [2024-07-15 22:48:42.834728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.470 [2024-07-15 22:48:42.834754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.470 qpair failed and we were unable to recover it.
00:24:59.470 [2024-07-15 22:48:42.834926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.470 [2024-07-15 22:48:42.834953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.470 qpair failed and we were unable to recover it.
00:24:59.470 [2024-07-15 22:48:42.835126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.470 [2024-07-15 22:48:42.835152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.470 qpair failed and we were unable to recover it. 00:24:59.470 [2024-07-15 22:48:42.835309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.470 [2024-07-15 22:48:42.835335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.470 qpair failed and we were unable to recover it. 00:24:59.470 [2024-07-15 22:48:42.835506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.470 [2024-07-15 22:48:42.835531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.470 qpair failed and we were unable to recover it. 00:24:59.470 [2024-07-15 22:48:42.835679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.470 [2024-07-15 22:48:42.835705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.470 qpair failed and we were unable to recover it. 00:24:59.470 [2024-07-15 22:48:42.835887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.470 [2024-07-15 22:48:42.835913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.470 qpair failed and we were unable to recover it. 
00:24:59.470 [2024-07-15 22:48:42.836058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.470 [2024-07-15 22:48:42.836084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.470 qpair failed and we were unable to recover it. 00:24:59.470 [2024-07-15 22:48:42.836266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.470 [2024-07-15 22:48:42.836291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.470 qpair failed and we were unable to recover it. 00:24:59.470 [2024-07-15 22:48:42.836426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.470 [2024-07-15 22:48:42.836452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 00:24:59.471 [2024-07-15 22:48:42.836598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.836623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 00:24:59.471 [2024-07-15 22:48:42.836774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.836801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 
00:24:59.471 [2024-07-15 22:48:42.836954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.836980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 00:24:59.471 [2024-07-15 22:48:42.837121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.837146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 00:24:59.471 [2024-07-15 22:48:42.837305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.837331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 00:24:59.471 [2024-07-15 22:48:42.837534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.837559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 00:24:59.471 [2024-07-15 22:48:42.837702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.837728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 
00:24:59.471 [2024-07-15 22:48:42.837868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.837899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 00:24:59.471 [2024-07-15 22:48:42.838098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.838123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 00:24:59.471 [2024-07-15 22:48:42.838288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.838314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 00:24:59.471 [2024-07-15 22:48:42.838453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.838479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 00:24:59.471 [2024-07-15 22:48:42.838661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.838687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 
00:24:59.471 [2024-07-15 22:48:42.838828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.838853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 00:24:59.471 [2024-07-15 22:48:42.839012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.839039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 00:24:59.471 [2024-07-15 22:48:42.839190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.839216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 00:24:59.471 [2024-07-15 22:48:42.839364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.839389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 00:24:59.471 [2024-07-15 22:48:42.839540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.839566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 
00:24:59.471 [2024-07-15 22:48:42.839712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.839738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 00:24:59.471 [2024-07-15 22:48:42.839888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.839916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 00:24:59.471 [2024-07-15 22:48:42.840095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.840121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 00:24:59.471 [2024-07-15 22:48:42.840268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.840294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 00:24:59.471 [2024-07-15 22:48:42.840430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.840456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 
00:24:59.471 [2024-07-15 22:48:42.840604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.840630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 00:24:59.471 [2024-07-15 22:48:42.840778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.840804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 00:24:59.471 [2024-07-15 22:48:42.840963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.840989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 00:24:59.471 [2024-07-15 22:48:42.841161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.841186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 00:24:59.471 [2024-07-15 22:48:42.841346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.841372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 
00:24:59.471 [2024-07-15 22:48:42.841517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.841547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 00:24:59.471 [2024-07-15 22:48:42.841695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.841721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 00:24:59.471 [2024-07-15 22:48:42.841861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.841892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 00:24:59.471 [2024-07-15 22:48:42.842072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.842097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 00:24:59.471 [2024-07-15 22:48:42.842241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.842267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 
00:24:59.471 [2024-07-15 22:48:42.842413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.842440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 00:24:59.471 [2024-07-15 22:48:42.842610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.842635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 00:24:59.471 [2024-07-15 22:48:42.842813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.842839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 00:24:59.471 [2024-07-15 22:48:42.842991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.843018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 00:24:59.471 [2024-07-15 22:48:42.843156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.843182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 
00:24:59.471 [2024-07-15 22:48:42.843352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.843378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 00:24:59.471 [2024-07-15 22:48:42.843551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.843577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 00:24:59.471 [2024-07-15 22:48:42.843743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.471 [2024-07-15 22:48:42.843769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.471 qpair failed and we were unable to recover it. 00:24:59.471 [2024-07-15 22:48:42.843940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.472 [2024-07-15 22:48:42.843967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.472 qpair failed and we were unable to recover it. 00:24:59.472 [2024-07-15 22:48:42.844125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.472 [2024-07-15 22:48:42.844152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.472 qpair failed and we were unable to recover it. 
00:24:59.472 [2024-07-15 22:48:42.844325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.472 [2024-07-15 22:48:42.844351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.472 qpair failed and we were unable to recover it. 00:24:59.472 [2024-07-15 22:48:42.844520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.472 [2024-07-15 22:48:42.844546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.472 qpair failed and we were unable to recover it. 00:24:59.472 [2024-07-15 22:48:42.844699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.472 [2024-07-15 22:48:42.844725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.472 qpair failed and we were unable to recover it. 00:24:59.472 [2024-07-15 22:48:42.844897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.472 [2024-07-15 22:48:42.844924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.472 qpair failed and we were unable to recover it. 00:24:59.472 [2024-07-15 22:48:42.845068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.472 [2024-07-15 22:48:42.845093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.472 qpair failed and we were unable to recover it. 
00:24:59.472 [2024-07-15 22:48:42.845262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.472 [2024-07-15 22:48:42.845288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.472 qpair failed and we were unable to recover it. 00:24:59.472 [2024-07-15 22:48:42.845431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.472 [2024-07-15 22:48:42.845457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.472 qpair failed and we were unable to recover it. 00:24:59.472 [2024-07-15 22:48:42.845599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.472 [2024-07-15 22:48:42.845626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.472 qpair failed and we were unable to recover it. 00:24:59.472 [2024-07-15 22:48:42.845771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.472 [2024-07-15 22:48:42.845797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.472 qpair failed and we were unable to recover it. 00:24:59.472 [2024-07-15 22:48:42.845999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.472 [2024-07-15 22:48:42.846026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.472 qpair failed and we were unable to recover it. 
00:24:59.472 [2024-07-15 22:48:42.846170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.472 [2024-07-15 22:48:42.846195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.472 qpair failed and we were unable to recover it. 00:24:59.472 [2024-07-15 22:48:42.846355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.472 [2024-07-15 22:48:42.846381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.472 qpair failed and we were unable to recover it. 00:24:59.472 [2024-07-15 22:48:42.846583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.472 [2024-07-15 22:48:42.846609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.472 qpair failed and we were unable to recover it. 00:24:59.472 [2024-07-15 22:48:42.846747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.472 [2024-07-15 22:48:42.846774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.472 qpair failed and we were unable to recover it. 00:24:59.472 [2024-07-15 22:48:42.846927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.472 [2024-07-15 22:48:42.846954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.472 qpair failed and we were unable to recover it. 
00:24:59.472 [2024-07-15 22:48:42.847121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.472 [2024-07-15 22:48:42.847147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.472 qpair failed and we were unable to recover it. 00:24:59.472 [2024-07-15 22:48:42.847293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.472 [2024-07-15 22:48:42.847319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.472 qpair failed and we were unable to recover it. 00:24:59.472 [2024-07-15 22:48:42.847465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.472 [2024-07-15 22:48:42.847491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.472 qpair failed and we were unable to recover it. 00:24:59.472 [2024-07-15 22:48:42.847647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.472 [2024-07-15 22:48:42.847673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.472 qpair failed and we were unable to recover it. 00:24:59.472 [2024-07-15 22:48:42.847815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.472 [2024-07-15 22:48:42.847840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.472 qpair failed and we were unable to recover it. 
00:24:59.472 [2024-07-15 22:48:42.848001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.472 [2024-07-15 22:48:42.848028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.472 qpair failed and we were unable to recover it. 00:24:59.472 [2024-07-15 22:48:42.848199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.472 [2024-07-15 22:48:42.848225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.472 qpair failed and we were unable to recover it. 00:24:59.472 [2024-07-15 22:48:42.848366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.472 [2024-07-15 22:48:42.848392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.472 qpair failed and we were unable to recover it. 00:24:59.472 [2024-07-15 22:48:42.848546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.472 [2024-07-15 22:48:42.848572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.472 qpair failed and we were unable to recover it. 00:24:59.472 [2024-07-15 22:48:42.848717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.472 [2024-07-15 22:48:42.848744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.472 qpair failed and we were unable to recover it. 
00:24:59.472 [2024-07-15 22:48:42.848915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.472 [2024-07-15 22:48:42.848946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.472 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1038 connect() failed, errno = 111; nvme_tcp.c:2383 sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously with advancing timestamps through [2024-07-15 22:48:42.870069] ...]
00:24:59.475 [2024-07-15 22:48:42.870227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.475 [2024-07-15 22:48:42.870253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.475 qpair failed and we were unable to recover it. 00:24:59.475 [2024-07-15 22:48:42.870397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.475 [2024-07-15 22:48:42.870423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.475 qpair failed and we were unable to recover it. 00:24:59.475 [2024-07-15 22:48:42.870587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.475 [2024-07-15 22:48:42.870613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.475 qpair failed and we were unable to recover it. 00:24:59.475 [2024-07-15 22:48:42.870750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.475 [2024-07-15 22:48:42.870776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.475 qpair failed and we were unable to recover it. 00:24:59.475 [2024-07-15 22:48:42.870934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.475 [2024-07-15 22:48:42.870961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.475 qpair failed and we were unable to recover it. 
00:24:59.475 [2024-07-15 22:48:42.871098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.475 [2024-07-15 22:48:42.871127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.475 qpair failed and we were unable to recover it. 00:24:59.475 [2024-07-15 22:48:42.871264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.475 [2024-07-15 22:48:42.871289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.475 qpair failed and we were unable to recover it. 00:24:59.475 [2024-07-15 22:48:42.871451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.475 [2024-07-15 22:48:42.871477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.475 qpair failed and we were unable to recover it. 00:24:59.475 [2024-07-15 22:48:42.871635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.475 [2024-07-15 22:48:42.871661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.475 qpair failed and we were unable to recover it. 00:24:59.475 [2024-07-15 22:48:42.871827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.475 [2024-07-15 22:48:42.871853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.475 qpair failed and we were unable to recover it. 
00:24:59.475 [2024-07-15 22:48:42.872007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.475 [2024-07-15 22:48:42.872034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.475 qpair failed and we were unable to recover it. 00:24:59.475 [2024-07-15 22:48:42.872178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.475 [2024-07-15 22:48:42.872203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.475 qpair failed and we were unable to recover it. 00:24:59.475 [2024-07-15 22:48:42.872377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.475 [2024-07-15 22:48:42.872402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.475 qpair failed and we were unable to recover it. 00:24:59.475 [2024-07-15 22:48:42.872550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.475 [2024-07-15 22:48:42.872576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.475 qpair failed and we were unable to recover it. 00:24:59.475 [2024-07-15 22:48:42.872742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.475 [2024-07-15 22:48:42.872768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.475 qpair failed and we were unable to recover it. 
00:24:59.475 [2024-07-15 22:48:42.872920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.475 [2024-07-15 22:48:42.872946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.475 qpair failed and we were unable to recover it. 00:24:59.476 [2024-07-15 22:48:42.873086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.873112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 00:24:59.476 [2024-07-15 22:48:42.873280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.873305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 00:24:59.476 [2024-07-15 22:48:42.873480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.873505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 00:24:59.476 [2024-07-15 22:48:42.873680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.873706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 
00:24:59.476 [2024-07-15 22:48:42.873874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.873905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 00:24:59.476 [2024-07-15 22:48:42.874065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.874091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 00:24:59.476 [2024-07-15 22:48:42.874229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.874254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 00:24:59.476 [2024-07-15 22:48:42.874395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.874421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 00:24:59.476 [2024-07-15 22:48:42.874585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.874611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 
00:24:59.476 [2024-07-15 22:48:42.874800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.874826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 00:24:59.476 [2024-07-15 22:48:42.874992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.875018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 00:24:59.476 [2024-07-15 22:48:42.875161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.875187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 00:24:59.476 [2024-07-15 22:48:42.875357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.875382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 00:24:59.476 [2024-07-15 22:48:42.875548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.875574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 
00:24:59.476 [2024-07-15 22:48:42.875717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.875742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 00:24:59.476 [2024-07-15 22:48:42.875885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.875911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 00:24:59.476 [2024-07-15 22:48:42.876105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.876131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 00:24:59.476 [2024-07-15 22:48:42.876273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.876299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 00:24:59.476 [2024-07-15 22:48:42.876478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.876504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 
00:24:59.476 [2024-07-15 22:48:42.876668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.876694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 00:24:59.476 [2024-07-15 22:48:42.876842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.876868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 00:24:59.476 [2024-07-15 22:48:42.877049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.877075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 00:24:59.476 [2024-07-15 22:48:42.877216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.877242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 00:24:59.476 [2024-07-15 22:48:42.877386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.877412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 
00:24:59.476 [2024-07-15 22:48:42.877598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.877624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 00:24:59.476 [2024-07-15 22:48:42.877786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.877812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 00:24:59.476 [2024-07-15 22:48:42.877981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.878007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 00:24:59.476 [2024-07-15 22:48:42.878172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.878198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 00:24:59.476 [2024-07-15 22:48:42.878339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.878365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 
00:24:59.476 [2024-07-15 22:48:42.878511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.878538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 00:24:59.476 [2024-07-15 22:48:42.878685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.878711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 00:24:59.476 [2024-07-15 22:48:42.878886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.878912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 00:24:59.476 [2024-07-15 22:48:42.879059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.879084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 00:24:59.476 [2024-07-15 22:48:42.879223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.879249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 
00:24:59.476 [2024-07-15 22:48:42.879415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.879441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 00:24:59.476 [2024-07-15 22:48:42.879640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.879665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 00:24:59.476 [2024-07-15 22:48:42.879837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.879863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 00:24:59.476 [2024-07-15 22:48:42.880006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.880032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 00:24:59.476 [2024-07-15 22:48:42.880203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.880228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 
00:24:59.476 [2024-07-15 22:48:42.880393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.880419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.476 qpair failed and we were unable to recover it. 00:24:59.476 [2024-07-15 22:48:42.880567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.476 [2024-07-15 22:48:42.880592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.477 qpair failed and we were unable to recover it. 00:24:59.477 [2024-07-15 22:48:42.880738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.477 [2024-07-15 22:48:42.880764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.477 qpair failed and we were unable to recover it. 00:24:59.477 [2024-07-15 22:48:42.880904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.477 [2024-07-15 22:48:42.880930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.477 qpair failed and we were unable to recover it. 00:24:59.477 [2024-07-15 22:48:42.881128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.477 [2024-07-15 22:48:42.881154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.477 qpair failed and we were unable to recover it. 
00:24:59.477 [2024-07-15 22:48:42.881293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.477 [2024-07-15 22:48:42.881319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.477 qpair failed and we were unable to recover it. 00:24:59.477 [2024-07-15 22:48:42.881488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.477 [2024-07-15 22:48:42.881514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.477 qpair failed and we were unable to recover it. 00:24:59.477 [2024-07-15 22:48:42.881681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.477 [2024-07-15 22:48:42.881706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.477 qpair failed and we were unable to recover it. 00:24:59.477 [2024-07-15 22:48:42.881861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.477 [2024-07-15 22:48:42.881892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.477 qpair failed and we were unable to recover it. 00:24:59.477 [2024-07-15 22:48:42.882043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.477 [2024-07-15 22:48:42.882069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.477 qpair failed and we were unable to recover it. 
00:24:59.477 [2024-07-15 22:48:42.882209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.477 [2024-07-15 22:48:42.882236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.477 qpair failed and we were unable to recover it. 00:24:59.477 [2024-07-15 22:48:42.882412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.477 [2024-07-15 22:48:42.882439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.477 qpair failed and we were unable to recover it. 00:24:59.477 [2024-07-15 22:48:42.882583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.477 [2024-07-15 22:48:42.882609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.477 qpair failed and we were unable to recover it. 00:24:59.477 [2024-07-15 22:48:42.882771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.477 [2024-07-15 22:48:42.882797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.477 qpair failed and we were unable to recover it. 00:24:59.477 [2024-07-15 22:48:42.882972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.477 [2024-07-15 22:48:42.882999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.477 qpair failed and we were unable to recover it. 
00:24:59.477 [2024-07-15 22:48:42.883144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.477 [2024-07-15 22:48:42.883171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.477 qpair failed and we were unable to recover it. 00:24:59.477 [2024-07-15 22:48:42.883345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.477 [2024-07-15 22:48:42.883371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.477 qpair failed and we were unable to recover it. 00:24:59.477 [2024-07-15 22:48:42.883546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.477 [2024-07-15 22:48:42.883575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.477 qpair failed and we were unable to recover it. 00:24:59.477 [2024-07-15 22:48:42.883745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.477 [2024-07-15 22:48:42.883771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.477 qpair failed and we were unable to recover it. 00:24:59.477 [2024-07-15 22:48:42.883914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.477 [2024-07-15 22:48:42.883940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.477 qpair failed and we were unable to recover it. 
00:24:59.477 [2024-07-15 22:48:42.884087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.477 [2024-07-15 22:48:42.884113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.477 qpair failed and we were unable to recover it. 00:24:59.477 [2024-07-15 22:48:42.884286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.477 [2024-07-15 22:48:42.884312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.477 qpair failed and we were unable to recover it. 00:24:59.477 [2024-07-15 22:48:42.884450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.477 [2024-07-15 22:48:42.884476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.477 qpair failed and we were unable to recover it. 00:24:59.477 [2024-07-15 22:48:42.884641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.477 [2024-07-15 22:48:42.884666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.477 qpair failed and we were unable to recover it. 00:24:59.477 [2024-07-15 22:48:42.884828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.477 [2024-07-15 22:48:42.884854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.477 qpair failed and we were unable to recover it. 
00:24:59.480 [2024-07-15 22:48:42.905656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.480 [2024-07-15 22:48:42.905682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.480 qpair failed and we were unable to recover it. 00:24:59.480 [2024-07-15 22:48:42.905825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.480 [2024-07-15 22:48:42.905850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.480 qpair failed and we were unable to recover it. 00:24:59.480 [2024-07-15 22:48:42.906021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.480 [2024-07-15 22:48:42.906052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.480 qpair failed and we were unable to recover it. 00:24:59.480 [2024-07-15 22:48:42.906230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.480 [2024-07-15 22:48:42.906256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.480 qpair failed and we were unable to recover it. 00:24:59.480 [2024-07-15 22:48:42.906401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.480 [2024-07-15 22:48:42.906427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.480 qpair failed and we were unable to recover it. 
00:24:59.480 [2024-07-15 22:48:42.906578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.480 [2024-07-15 22:48:42.906604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.480 qpair failed and we were unable to recover it. 00:24:59.480 [2024-07-15 22:48:42.906774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.480 [2024-07-15 22:48:42.906799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.480 qpair failed and we were unable to recover it. 00:24:59.480 [2024-07-15 22:48:42.906954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.480 [2024-07-15 22:48:42.906981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.480 qpair failed and we were unable to recover it. 00:24:59.480 [2024-07-15 22:48:42.907121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.480 [2024-07-15 22:48:42.907148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.480 qpair failed and we were unable to recover it. 00:24:59.480 [2024-07-15 22:48:42.907297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.480 [2024-07-15 22:48:42.907323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.480 qpair failed and we were unable to recover it. 
00:24:59.480 [2024-07-15 22:48:42.907477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.480 [2024-07-15 22:48:42.907503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.480 qpair failed and we were unable to recover it. 00:24:59.480 [2024-07-15 22:48:42.907637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.480 [2024-07-15 22:48:42.907663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.480 qpair failed and we were unable to recover it. 00:24:59.480 [2024-07-15 22:48:42.907822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.480 [2024-07-15 22:48:42.907848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.480 qpair failed and we were unable to recover it. 00:24:59.480 [2024-07-15 22:48:42.908028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.480 [2024-07-15 22:48:42.908054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.480 qpair failed and we were unable to recover it. 00:24:59.480 [2024-07-15 22:48:42.908212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.480 [2024-07-15 22:48:42.908238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.480 qpair failed and we were unable to recover it. 
00:24:59.480 [2024-07-15 22:48:42.908406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.480 [2024-07-15 22:48:42.908432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.480 qpair failed and we were unable to recover it. 00:24:59.480 [2024-07-15 22:48:42.908616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.480 [2024-07-15 22:48:42.908642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.480 qpair failed and we were unable to recover it. 00:24:59.480 [2024-07-15 22:48:42.908797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.480 [2024-07-15 22:48:42.908823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.480 qpair failed and we were unable to recover it. 00:24:59.480 [2024-07-15 22:48:42.909007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.480 [2024-07-15 22:48:42.909033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.480 qpair failed and we were unable to recover it. 00:24:59.480 [2024-07-15 22:48:42.909181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.480 [2024-07-15 22:48:42.909207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.480 qpair failed and we were unable to recover it. 
00:24:59.480 [2024-07-15 22:48:42.909352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.480 [2024-07-15 22:48:42.909379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.480 qpair failed and we were unable to recover it. 00:24:59.480 [2024-07-15 22:48:42.909524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.480 [2024-07-15 22:48:42.909555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.480 qpair failed and we were unable to recover it. 00:24:59.480 [2024-07-15 22:48:42.909700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.480 [2024-07-15 22:48:42.909728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.480 qpair failed and we were unable to recover it. 00:24:59.480 [2024-07-15 22:48:42.909867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.480 [2024-07-15 22:48:42.909898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.480 qpair failed and we were unable to recover it. 00:24:59.480 [2024-07-15 22:48:42.910045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.480 [2024-07-15 22:48:42.910071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.480 qpair failed and we were unable to recover it. 
00:24:59.480 [2024-07-15 22:48:42.910247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.480 [2024-07-15 22:48:42.910273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.480 qpair failed and we were unable to recover it. 00:24:59.480 [2024-07-15 22:48:42.910448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.480 [2024-07-15 22:48:42.910475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.481 qpair failed and we were unable to recover it. 00:24:59.481 [2024-07-15 22:48:42.910623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.481 [2024-07-15 22:48:42.910650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.481 qpair failed and we were unable to recover it. 00:24:59.481 [2024-07-15 22:48:42.910791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.481 [2024-07-15 22:48:42.910817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.481 qpair failed and we were unable to recover it. 00:24:59.481 [2024-07-15 22:48:42.910971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.481 [2024-07-15 22:48:42.910998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.481 qpair failed and we were unable to recover it. 
00:24:59.481 [2024-07-15 22:48:42.911176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.481 [2024-07-15 22:48:42.911203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.481 qpair failed and we were unable to recover it. 00:24:59.481 [2024-07-15 22:48:42.911348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.481 [2024-07-15 22:48:42.911375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.481 qpair failed and we were unable to recover it. 00:24:59.481 [2024-07-15 22:48:42.911518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.481 [2024-07-15 22:48:42.911544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.481 qpair failed and we were unable to recover it. 00:24:59.481 [2024-07-15 22:48:42.911713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.481 [2024-07-15 22:48:42.911739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.481 qpair failed and we were unable to recover it. 00:24:59.481 [2024-07-15 22:48:42.911881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.481 [2024-07-15 22:48:42.911907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.481 qpair failed and we were unable to recover it. 
00:24:59.481 [2024-07-15 22:48:42.912075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.481 [2024-07-15 22:48:42.912101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.481 qpair failed and we were unable to recover it. 00:24:59.481 [2024-07-15 22:48:42.912257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.481 [2024-07-15 22:48:42.912282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.481 qpair failed and we were unable to recover it. 00:24:59.481 [2024-07-15 22:48:42.912454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.481 [2024-07-15 22:48:42.912480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-15 22:48:42.912655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.912681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-15 22:48:42.912851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.912881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 
00:24:59.759 [2024-07-15 22:48:42.913039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.913065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-15 22:48:42.913206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.913232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-15 22:48:42.913380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.913414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-15 22:48:42.913561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.913588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-15 22:48:42.913745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.913771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 
00:24:59.759 [2024-07-15 22:48:42.913945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.913980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-15 22:48:42.914125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.914151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-15 22:48:42.914299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.914324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-15 22:48:42.914491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.914517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-15 22:48:42.914678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.914703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 
00:24:59.759 [2024-07-15 22:48:42.914853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.914892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-15 22:48:42.915059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.915085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-15 22:48:42.915264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.915290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-15 22:48:42.915489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.915515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-15 22:48:42.915670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.915696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 
00:24:59.759 [2024-07-15 22:48:42.915855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.915886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-15 22:48:42.916062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.916088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-15 22:48:42.916230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.916256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-15 22:48:42.916424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.916451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-15 22:48:42.916628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.916654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 
00:24:59.759 [2024-07-15 22:48:42.916826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.916852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-15 22:48:42.917009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.917035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-15 22:48:42.917182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.917208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-15 22:48:42.917375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.917400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-15 22:48:42.917560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.917586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 
00:24:59.759 [2024-07-15 22:48:42.917727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.917753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-15 22:48:42.917910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.917937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-15 22:48:42.918115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.918141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-15 22:48:42.918309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.918335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-15 22:48:42.918482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.918508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 
00:24:59.759 [2024-07-15 22:48:42.918657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.918684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-15 22:48:42.918850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.918880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-15 22:48:42.919058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.919084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-15 22:48:42.919218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-15 22:48:42.919244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.760 [2024-07-15 22:48:42.919406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.760 [2024-07-15 22:48:42.919432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.760 qpair failed and we were unable to recover it. 
00:24:59.760 [2024-07-15 22:48:42.919591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.760 [2024-07-15 22:48:42.919617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.760 qpair failed and we were unable to recover it. 00:24:59.760 [2024-07-15 22:48:42.919763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.760 [2024-07-15 22:48:42.919790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.760 qpair failed and we were unable to recover it. 00:24:59.760 [2024-07-15 22:48:42.919938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.760 [2024-07-15 22:48:42.919965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.760 qpair failed and we were unable to recover it. 00:24:59.760 [2024-07-15 22:48:42.920132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.760 [2024-07-15 22:48:42.920158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.760 qpair failed and we were unable to recover it. 00:24:59.760 [2024-07-15 22:48:42.920305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.760 [2024-07-15 22:48:42.920330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.760 qpair failed and we were unable to recover it. 
00:24:59.762 [2024-07-15 22:48:42.941002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.762 [2024-07-15 22:48:42.941029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.762 qpair failed and we were unable to recover it. 00:24:59.762 [2024-07-15 22:48:42.941175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.762 [2024-07-15 22:48:42.941201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.762 qpair failed and we were unable to recover it. 00:24:59.762 [2024-07-15 22:48:42.941340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.762 [2024-07-15 22:48:42.941367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.762 qpair failed and we were unable to recover it. 00:24:59.762 [2024-07-15 22:48:42.941501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.762 [2024-07-15 22:48:42.941527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.762 qpair failed and we were unable to recover it. 00:24:59.762 [2024-07-15 22:48:42.941669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.762 [2024-07-15 22:48:42.941695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.762 qpair failed and we were unable to recover it. 
00:24:59.762 [2024-07-15 22:48:42.941832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-15 22:48:42.941858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-15 22:48:42.942037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-15 22:48:42.942063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-15 22:48:42.942232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-15 22:48:42.942258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-15 22:48:42.942402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-15 22:48:42.942427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-15 22:48:42.942569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-15 22:48:42.942596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 
00:24:59.763 [2024-07-15 22:48:42.942736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-15 22:48:42.942762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-15 22:48:42.942908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-15 22:48:42.942934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-15 22:48:42.943093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-15 22:48:42.943123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-15 22:48:42.943298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-15 22:48:42.943323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-15 22:48:42.943463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-15 22:48:42.943489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 
00:24:59.763 [2024-07-15 22:48:42.943638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-15 22:48:42.943663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-15 22:48:42.943802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-15 22:48:42.943827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-15 22:48:42.943978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-15 22:48:42.944005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-15 22:48:42.944143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-15 22:48:42.944169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-15 22:48:42.944339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-15 22:48:42.944365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 
00:24:59.763 [2024-07-15 22:48:42.944540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-15 22:48:42.944566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-15 22:48:42.944705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-15 22:48:42.944732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-15 22:48:42.944909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-15 22:48:42.944936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-15 22:48:42.945076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-15 22:48:42.945102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-15 22:48:42.945291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-15 22:48:42.945317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 
00:24:59.763 [2024-07-15 22:48:42.945475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-15 22:48:42.945501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-15 22:48:42.945640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-15 22:48:42.945666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-15 22:48:42.945808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-15 22:48:42.945835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-15 22:48:42.945991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-15 22:48:42.946017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-15 22:48:42.946180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-15 22:48:42.946206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 
00:24:59.763 [2024-07-15 22:48:42.946368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-15 22:48:42.946393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-15 22:48:42.946562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-15 22:48:42.946588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-15 22:48:42.946752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-15 22:48:42.946778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-15 22:48:42.946955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-15 22:48:42.946982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-15 22:48:42.947128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.947154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 
00:24:59.764 [2024-07-15 22:48:42.947357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.947383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-15 22:48:42.947543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.947569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-15 22:48:42.947753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.947779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-15 22:48:42.947919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.947945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-15 22:48:42.948101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.948128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 
00:24:59.764 [2024-07-15 22:48:42.948286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.948311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-15 22:48:42.948483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.948508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-15 22:48:42.948649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.948675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-15 22:48:42.948811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.948836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-15 22:48:42.948991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.949018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 
00:24:59.764 [2024-07-15 22:48:42.949153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.949179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-15 22:48:42.949334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.949361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-15 22:48:42.949516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.949542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-15 22:48:42.949718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.949744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-15 22:48:42.949886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.949913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 
00:24:59.764 [2024-07-15 22:48:42.950056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.950082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-15 22:48:42.950221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.950247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-15 22:48:42.950431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.950463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-15 22:48:42.950612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.950638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-15 22:48:42.950801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.950826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 
00:24:59.764 [2024-07-15 22:48:42.951007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.951033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-15 22:48:42.951171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.951197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-15 22:48:42.951365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.951391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-15 22:48:42.951530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.951555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-15 22:48:42.951714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.951740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 
00:24:59.764 [2024-07-15 22:48:42.951898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.951924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-15 22:48:42.952065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.952090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-15 22:48:42.952242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.952268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-15 22:48:42.952456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.952482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-15 22:48:42.952646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.952672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 
00:24:59.764 [2024-07-15 22:48:42.952808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.952834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-15 22:48:42.953011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.953037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-15 22:48:42.953181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.953206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-15 22:48:42.953361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.953387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-15 22:48:42.953564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.953590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 
00:24:59.764 [2024-07-15 22:48:42.953745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.953770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-15 22:48:42.953948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.953974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-15 22:48:42.954115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.954140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-15 22:48:42.954295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.954321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-15 22:48:42.954460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-15 22:48:42.954485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 
00:24:59.764 [2024-07-15 22:48:42.954625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-15 22:48:42.954650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-15 22:48:42.954821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-15 22:48:42.954847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-15 22:48:42.955031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-15 22:48:42.955057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-15 22:48:42.955205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-15 22:48:42.955231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-15 22:48:42.955407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-15 22:48:42.955434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 
00:24:59.767 [2024-07-15 22:48:42.975964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-15 22:48:42.975991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 00:24:59.768 [2024-07-15 22:48:42.976153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.976180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 00:24:59.768 [2024-07-15 22:48:42.976348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.976374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 00:24:59.768 [2024-07-15 22:48:42.976551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.976577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 00:24:59.768 [2024-07-15 22:48:42.976725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.976751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 
00:24:59.768 [2024-07-15 22:48:42.976918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.976955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 00:24:59.768 [2024-07-15 22:48:42.977096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.977123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 00:24:59.768 [2024-07-15 22:48:42.977265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.977291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 00:24:59.768 [2024-07-15 22:48:42.977451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.977477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 00:24:59.768 [2024-07-15 22:48:42.977654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.977684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 
00:24:59.768 [2024-07-15 22:48:42.977852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.977883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 00:24:59.768 [2024-07-15 22:48:42.978027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.978053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 00:24:59.768 [2024-07-15 22:48:42.978201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.978227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 00:24:59.768 [2024-07-15 22:48:42.978385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.978411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 00:24:59.768 [2024-07-15 22:48:42.978611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.978637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 
00:24:59.768 [2024-07-15 22:48:42.978792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.978818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 00:24:59.768 [2024-07-15 22:48:42.978989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.979016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 00:24:59.768 [2024-07-15 22:48:42.979189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.979216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 00:24:59.768 [2024-07-15 22:48:42.979373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.979400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 00:24:59.768 [2024-07-15 22:48:42.979540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.979566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 
00:24:59.768 [2024-07-15 22:48:42.979722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.979749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 00:24:59.768 [2024-07-15 22:48:42.979938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.979965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 00:24:59.768 [2024-07-15 22:48:42.980117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.980143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 00:24:59.768 [2024-07-15 22:48:42.980339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.980365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 00:24:59.768 [2024-07-15 22:48:42.980522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.980548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 
00:24:59.768 [2024-07-15 22:48:42.980726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.980752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 00:24:59.768 [2024-07-15 22:48:42.980899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.980925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 00:24:59.768 [2024-07-15 22:48:42.981098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.981124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 00:24:59.768 [2024-07-15 22:48:42.981298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.981324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 00:24:59.768 [2024-07-15 22:48:42.981470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.981496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 
00:24:59.768 [2024-07-15 22:48:42.981660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.981685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 00:24:59.768 [2024-07-15 22:48:42.981833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.981859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 00:24:59.768 [2024-07-15 22:48:42.982049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.982075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 00:24:59.768 [2024-07-15 22:48:42.982222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.982249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 00:24:59.768 [2024-07-15 22:48:42.982394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.982422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 
00:24:59.768 [2024-07-15 22:48:42.982605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.982632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 00:24:59.768 [2024-07-15 22:48:42.982816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.982842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 00:24:59.768 [2024-07-15 22:48:42.983035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.983062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 00:24:59.768 [2024-07-15 22:48:42.983209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.983235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 00:24:59.768 [2024-07-15 22:48:42.983393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.983419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 
00:24:59.768 [2024-07-15 22:48:42.983562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.983588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.768 qpair failed and we were unable to recover it. 00:24:59.768 [2024-07-15 22:48:42.983727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.768 [2024-07-15 22:48:42.983753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.769 qpair failed and we were unable to recover it. 00:24:59.769 [2024-07-15 22:48:42.983946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.769 [2024-07-15 22:48:42.983972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.769 qpair failed and we were unable to recover it. 00:24:59.769 [2024-07-15 22:48:42.984114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.769 [2024-07-15 22:48:42.984140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.769 qpair failed and we were unable to recover it. 00:24:59.769 [2024-07-15 22:48:42.984282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.769 [2024-07-15 22:48:42.984318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.769 qpair failed and we were unable to recover it. 
00:24:59.769 [2024-07-15 22:48:42.984465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.769 [2024-07-15 22:48:42.984492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.769 qpair failed and we were unable to recover it. 00:24:59.769 [2024-07-15 22:48:42.984645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.769 [2024-07-15 22:48:42.984671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.769 qpair failed and we were unable to recover it. 00:24:59.769 [2024-07-15 22:48:42.984815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.769 [2024-07-15 22:48:42.984840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.769 qpair failed and we were unable to recover it. 00:24:59.769 [2024-07-15 22:48:42.985000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.769 [2024-07-15 22:48:42.985027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.769 qpair failed and we were unable to recover it. 00:24:59.769 [2024-07-15 22:48:42.985194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.769 [2024-07-15 22:48:42.985225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.769 qpair failed and we were unable to recover it. 
00:24:59.769 [2024-07-15 22:48:42.985363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.769 [2024-07-15 22:48:42.985389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.769 qpair failed and we were unable to recover it. 00:24:59.769 [2024-07-15 22:48:42.985529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.769 [2024-07-15 22:48:42.985555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.769 qpair failed and we were unable to recover it. 00:24:59.769 [2024-07-15 22:48:42.985702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.769 [2024-07-15 22:48:42.985727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.769 qpair failed and we were unable to recover it. 00:24:59.769 [2024-07-15 22:48:42.985880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.769 [2024-07-15 22:48:42.985906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.769 qpair failed and we were unable to recover it. 00:24:59.769 [2024-07-15 22:48:42.986074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.769 [2024-07-15 22:48:42.986100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.769 qpair failed and we were unable to recover it. 
00:24:59.769 [2024-07-15 22:48:42.986272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.769 [2024-07-15 22:48:42.986297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.769 qpair failed and we were unable to recover it. 00:24:59.769 [2024-07-15 22:48:42.986445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.769 [2024-07-15 22:48:42.986471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.769 qpair failed and we were unable to recover it. 00:24:59.769 [2024-07-15 22:48:42.986642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.769 [2024-07-15 22:48:42.986667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.769 qpair failed and we were unable to recover it. 00:24:59.769 [2024-07-15 22:48:42.986831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.769 [2024-07-15 22:48:42.986858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.769 qpair failed and we were unable to recover it. 00:24:59.769 [2024-07-15 22:48:42.987042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.769 [2024-07-15 22:48:42.987068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.769 qpair failed and we were unable to recover it. 
00:24:59.769 [2024-07-15 22:48:42.987266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.769 [2024-07-15 22:48:42.987292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.769 qpair failed and we were unable to recover it. 00:24:59.769 [2024-07-15 22:48:42.987460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.769 [2024-07-15 22:48:42.987486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.769 qpair failed and we were unable to recover it. 00:24:59.769 [2024-07-15 22:48:42.987659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.769 [2024-07-15 22:48:42.987685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.769 qpair failed and we were unable to recover it. 00:24:59.769 [2024-07-15 22:48:42.987841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.769 [2024-07-15 22:48:42.987866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.769 qpair failed and we were unable to recover it. 00:24:59.769 [2024-07-15 22:48:42.988033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.769 [2024-07-15 22:48:42.988060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.769 qpair failed and we were unable to recover it. 
00:24:59.769 [2024-07-15 22:48:42.988231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.769 [2024-07-15 22:48:42.988258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.769 qpair failed and we were unable to recover it. 00:24:59.769 [2024-07-15 22:48:42.988410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.769 [2024-07-15 22:48:42.988436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.769 qpair failed and we were unable to recover it. 00:24:59.769 [2024-07-15 22:48:42.988596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.769 [2024-07-15 22:48:42.988622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.769 qpair failed and we were unable to recover it. 00:24:59.769 [2024-07-15 22:48:42.988766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.769 [2024-07-15 22:48:42.988792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.769 qpair failed and we were unable to recover it. 00:24:59.769 [2024-07-15 22:48:42.988999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.769 [2024-07-15 22:48:42.989026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.769 qpair failed and we were unable to recover it. 
00:24:59.769 [2024-07-15 22:48:42.989226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.769 [2024-07-15 22:48:42.989252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.769 qpair failed and we were unable to recover it. 00:24:59.769 [2024-07-15 22:48:42.989397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.769 [2024-07-15 22:48:42.989424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.769 qpair failed and we were unable to recover it. 00:24:59.769 [2024-07-15 22:48:42.989612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.769 [2024-07-15 22:48:42.989637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.769 qpair failed and we were unable to recover it. 00:24:59.769 [2024-07-15 22:48:42.989803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.769 [2024-07-15 22:48:42.989828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.769 qpair failed and we were unable to recover it. 00:24:59.769 [2024-07-15 22:48:42.989992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.769 [2024-07-15 22:48:42.990019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.769 qpair failed and we were unable to recover it. 
00:24:59.769 [2024-07-15 22:48:42.990164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.769 [2024-07-15 22:48:42.990190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.769 qpair failed and we were unable to recover it.
00:24:59.769 [2024-07-15 22:48:42.990365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.769 [2024-07-15 22:48:42.990391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.769 qpair failed and we were unable to recover it.
00:24:59.769 [2024-07-15 22:48:42.990567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.769 [2024-07-15 22:48:42.990593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.769 qpair failed and we were unable to recover it.
00:24:59.769 [2024-07-15 22:48:42.990746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.769 [2024-07-15 22:48:42.990772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.769 qpair failed and we were unable to recover it.
00:24:59.769 [2024-07-15 22:48:42.990927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.769 [2024-07-15 22:48:42.990954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.769 qpair failed and we were unable to recover it.
00:24:59.769 [2024-07-15 22:48:42.991102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.769 [2024-07-15 22:48:42.991128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.769 qpair failed and we were unable to recover it.
00:24:59.769 [2024-07-15 22:48:42.991319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.769 [2024-07-15 22:48:42.991346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.769 qpair failed and we were unable to recover it.
00:24:59.769 [2024-07-15 22:48:42.991492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.769 [2024-07-15 22:48:42.991518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.991662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.991688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.991856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.991898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.992042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.992068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.992211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.992237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.992379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.992405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.992546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.992572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.992710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.992740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.992913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.992940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.993087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.993114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.993266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.993292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.993427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.993453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.993588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.993614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.993757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.993783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.993962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.993989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.994131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.994157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.994321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.994348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.994519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.994545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.994699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.994724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.994883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.994909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.995057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.995083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.995266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.995291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.995430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.995456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.995601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.995626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.995804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.995830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.996001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.996028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.996171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.996196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.996337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.996363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.996505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.996531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.996690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.996716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.996894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.996920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.997084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.997110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.997265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.997291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.997433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.997459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.997607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.997632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.997821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.997846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.998005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.998031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.998165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.998190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.998359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.998384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.998531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.998557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.998704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.998730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.770 qpair failed and we were unable to recover it.
00:24:59.770 [2024-07-15 22:48:42.998867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.770 [2024-07-15 22:48:42.998898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:42.999070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:42.999095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:42.999259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:42.999284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:42.999427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:42.999453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:42.999621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:42.999647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:42.999814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:42.999839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:42.999987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.000017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:43.000187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.000213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:43.000388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.000413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:43.000569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.000595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:43.000733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.000759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:43.000919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.000947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:43.001121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.001147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:43.001303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.001330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:43.001480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.001506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:43.001652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.001678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:43.001838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.001863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:43.002012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.002038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:43.002185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.002211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:43.002379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.002404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:43.002550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.002578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:43.002716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.002743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:43.002952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.002979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:43.003129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.003156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:43.003320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.003346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:43.003515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.003541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:43.003695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.003722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:43.003869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.003899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:43.004069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.004095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:43.004267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.004293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:43.004433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.004459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:43.004610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.004636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:43.004771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.004797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:43.004956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.004982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:43.005118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.005144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:43.005291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.005317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:43.005473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.005499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:43.005641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.005667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:43.005866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.005898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:43.006056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.006081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.771 [2024-07-15 22:48:43.006222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.771 [2024-07-15 22:48:43.006248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.771 qpair failed and we were unable to recover it.
00:24:59.772 [2024-07-15 22:48:43.006405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.772 [2024-07-15 22:48:43.006430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.772 qpair failed and we were unable to recover it.
00:24:59.772 [2024-07-15 22:48:43.006600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.772 [2024-07-15 22:48:43.006626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.772 qpair failed and we were unable to recover it.
00:24:59.772 [2024-07-15 22:48:43.006764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.772 [2024-07-15 22:48:43.006791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.772 qpair failed and we were unable to recover it.
00:24:59.772 [2024-07-15 22:48:43.006982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.772 [2024-07-15 22:48:43.007008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.772 qpair failed and we were unable to recover it.
00:24:59.772 [2024-07-15 22:48:43.007150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.772 [2024-07-15 22:48:43.007176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.772 qpair failed and we were unable to recover it.
00:24:59.772 [2024-07-15 22:48:43.007377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.772 [2024-07-15 22:48:43.007407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.772 qpair failed and we were unable to recover it.
00:24:59.772 [2024-07-15 22:48:43.007552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.772 [2024-07-15 22:48:43.007578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.772 qpair failed and we were unable to recover it.
00:24:59.772 [2024-07-15 22:48:43.007745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.772 [2024-07-15 22:48:43.007771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.772 qpair failed and we were unable to recover it.
00:24:59.772 [2024-07-15 22:48:43.007971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.772 [2024-07-15 22:48:43.007997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.772 qpair failed and we were unable to recover it.
00:24:59.772 [2024-07-15 22:48:43.008140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.772 [2024-07-15 22:48:43.008166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.772 qpair failed and we were unable to recover it.
00:24:59.772 [2024-07-15 22:48:43.008334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.772 [2024-07-15 22:48:43.008360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.772 qpair failed and we were unable to recover it.
00:24:59.772 [2024-07-15 22:48:43.008511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.772 [2024-07-15 22:48:43.008538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.772 qpair failed and we were unable to recover it.
00:24:59.772 [2024-07-15 22:48:43.008676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.772 [2024-07-15 22:48:43.008702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.772 qpair failed and we were unable to recover it.
00:24:59.772 [2024-07-15 22:48:43.008866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.772 [2024-07-15 22:48:43.008908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.772 qpair failed and we were unable to recover it.
00:24:59.772 [2024-07-15 22:48:43.009049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.772 [2024-07-15 22:48:43.009075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.772 qpair failed and we were unable to recover it.
00:24:59.772 [2024-07-15 22:48:43.009288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.772 [2024-07-15 22:48:43.009313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.772 qpair failed and we were unable to recover it.
00:24:59.772 [2024-07-15 22:48:43.009458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.772 [2024-07-15 22:48:43.009484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.772 qpair failed and we were unable to recover it.
00:24:59.772 [2024-07-15 22:48:43.009673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.772 [2024-07-15 22:48:43.009700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.772 qpair failed and we were unable to recover it.
00:24:59.772 [2024-07-15 22:48:43.009861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.772 [2024-07-15 22:48:43.009892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.772 qpair failed and we were unable to recover it.
00:24:59.772 [2024-07-15 22:48:43.010054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.772 [2024-07-15 22:48:43.010080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.772 qpair failed and we were unable to recover it.
00:24:59.772 [2024-07-15 22:48:43.010282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.772 [2024-07-15 22:48:43.010307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.772 qpair failed and we were unable to recover it.
00:24:59.772 [2024-07-15 22:48:43.010479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.772 [2024-07-15 22:48:43.010506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.772 qpair failed and we were unable to recover it.
00:24:59.772 [2024-07-15 22:48:43.010697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.772 [2024-07-15 22:48:43.010723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.772 qpair failed and we were unable to recover it.
00:24:59.772 [2024-07-15 22:48:43.010905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.772 [2024-07-15 22:48:43.010931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.772 qpair failed and we were unable to recover it.
00:24:59.772 [2024-07-15 22:48:43.011074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.772 [2024-07-15 22:48:43.011099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.772 qpair failed and we were unable to recover it.
00:24:59.772 [2024-07-15 22:48:43.011255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.772 [2024-07-15 22:48:43.011281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.772 qpair failed and we were unable to recover it.
00:24:59.772 [2024-07-15 22:48:43.011451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-15 22:48:43.011477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 00:24:59.772 [2024-07-15 22:48:43.011619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-15 22:48:43.011644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 00:24:59.772 [2024-07-15 22:48:43.011797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-15 22:48:43.011822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 00:24:59.772 [2024-07-15 22:48:43.012000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-15 22:48:43.012026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 00:24:59.772 [2024-07-15 22:48:43.012203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-15 22:48:43.012229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 
00:24:59.772 [2024-07-15 22:48:43.012372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-15 22:48:43.012397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 00:24:59.772 [2024-07-15 22:48:43.012576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-15 22:48:43.012602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 00:24:59.772 [2024-07-15 22:48:43.012741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-15 22:48:43.012767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 00:24:59.772 [2024-07-15 22:48:43.012950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-15 22:48:43.012976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 00:24:59.772 [2024-07-15 22:48:43.013140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-15 22:48:43.013166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 
00:24:59.773 [2024-07-15 22:48:43.013321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.013348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 00:24:59.773 [2024-07-15 22:48:43.013492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.013518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 00:24:59.773 [2024-07-15 22:48:43.013656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.013682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 00:24:59.773 [2024-07-15 22:48:43.013853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.013897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 00:24:59.773 [2024-07-15 22:48:43.014039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.014065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 
00:24:59.773 [2024-07-15 22:48:43.014233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.014259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 00:24:59.773 [2024-07-15 22:48:43.014401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.014427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 00:24:59.773 [2024-07-15 22:48:43.014571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.014598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 00:24:59.773 [2024-07-15 22:48:43.014765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.014790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 00:24:59.773 [2024-07-15 22:48:43.014982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.015013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 
00:24:59.773 [2024-07-15 22:48:43.015170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.015197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 00:24:59.773 [2024-07-15 22:48:43.015336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.015361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 00:24:59.773 [2024-07-15 22:48:43.015518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.015544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 00:24:59.773 [2024-07-15 22:48:43.015685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.015711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 00:24:59.773 [2024-07-15 22:48:43.015856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.015888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 
00:24:59.773 [2024-07-15 22:48:43.016029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.016054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 00:24:59.773 [2024-07-15 22:48:43.016229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.016254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 00:24:59.773 [2024-07-15 22:48:43.016411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.016437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 00:24:59.773 [2024-07-15 22:48:43.016597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.016623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 00:24:59.773 [2024-07-15 22:48:43.016799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.016824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 
00:24:59.773 [2024-07-15 22:48:43.017014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.017040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 00:24:59.773 [2024-07-15 22:48:43.017181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.017207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 00:24:59.773 [2024-07-15 22:48:43.017349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.017375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 00:24:59.773 [2024-07-15 22:48:43.017525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.017552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 00:24:59.773 [2024-07-15 22:48:43.017729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.017755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 
00:24:59.773 [2024-07-15 22:48:43.017898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.017924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 00:24:59.773 [2024-07-15 22:48:43.018094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.018120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 00:24:59.773 [2024-07-15 22:48:43.018262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.018288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 00:24:59.773 [2024-07-15 22:48:43.018456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.018481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 00:24:59.773 [2024-07-15 22:48:43.018641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.018667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 
00:24:59.773 [2024-07-15 22:48:43.018805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.018830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 00:24:59.773 [2024-07-15 22:48:43.018966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.018992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 00:24:59.773 [2024-07-15 22:48:43.019167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.019193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 00:24:59.773 [2024-07-15 22:48:43.019334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.019359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 00:24:59.773 [2024-07-15 22:48:43.019523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.019549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 
00:24:59.773 [2024-07-15 22:48:43.019717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.019743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 00:24:59.773 [2024-07-15 22:48:43.019890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.019916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 00:24:59.773 [2024-07-15 22:48:43.020082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.020107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 00:24:59.773 [2024-07-15 22:48:43.020249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.020275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 00:24:59.773 [2024-07-15 22:48:43.020442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.773 [2024-07-15 22:48:43.020468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.773 qpair failed and we were unable to recover it. 
00:24:59.773 [2024-07-15 22:48:43.020626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.774 [2024-07-15 22:48:43.020651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.774 qpair failed and we were unable to recover it. 00:24:59.774 [2024-07-15 22:48:43.020797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.774 [2024-07-15 22:48:43.020823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.774 qpair failed and we were unable to recover it. 00:24:59.774 [2024-07-15 22:48:43.020994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.774 [2024-07-15 22:48:43.021019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.774 qpair failed and we were unable to recover it. 00:24:59.774 [2024-07-15 22:48:43.021173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.774 [2024-07-15 22:48:43.021198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.774 qpair failed and we were unable to recover it. 00:24:59.774 [2024-07-15 22:48:43.021334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.774 [2024-07-15 22:48:43.021360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.774 qpair failed and we were unable to recover it. 
00:24:59.774 [2024-07-15 22:48:43.021538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.774 [2024-07-15 22:48:43.021563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.774 qpair failed and we were unable to recover it. 00:24:59.774 [2024-07-15 22:48:43.021726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.774 [2024-07-15 22:48:43.021752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.774 qpair failed and we were unable to recover it. 00:24:59.774 [2024-07-15 22:48:43.021904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.774 [2024-07-15 22:48:43.021929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.774 qpair failed and we were unable to recover it. 00:24:59.774 [2024-07-15 22:48:43.022063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.774 [2024-07-15 22:48:43.022089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.774 qpair failed and we were unable to recover it. 00:24:59.774 [2024-07-15 22:48:43.022242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.774 [2024-07-15 22:48:43.022272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.774 qpair failed and we were unable to recover it. 
00:24:59.774 [2024-07-15 22:48:43.022435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.774 [2024-07-15 22:48:43.022460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.774 qpair failed and we were unable to recover it. 00:24:59.774 [2024-07-15 22:48:43.022600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.774 [2024-07-15 22:48:43.022626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.774 qpair failed and we were unable to recover it. 00:24:59.774 [2024-07-15 22:48:43.022816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.774 [2024-07-15 22:48:43.022842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.774 qpair failed and we were unable to recover it. 00:24:59.774 [2024-07-15 22:48:43.023020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.774 [2024-07-15 22:48:43.023046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.774 qpair failed and we were unable to recover it. 00:24:59.774 [2024-07-15 22:48:43.023183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.774 [2024-07-15 22:48:43.023208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.774 qpair failed and we were unable to recover it. 
00:24:59.774 [2024-07-15 22:48:43.023359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.774 [2024-07-15 22:48:43.023385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.774 qpair failed and we were unable to recover it. 00:24:59.774 [2024-07-15 22:48:43.023537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.774 [2024-07-15 22:48:43.023563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.774 qpair failed and we were unable to recover it. 00:24:59.774 [2024-07-15 22:48:43.023732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.774 [2024-07-15 22:48:43.023758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.774 qpair failed and we were unable to recover it. 00:24:59.774 [2024-07-15 22:48:43.023898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.774 [2024-07-15 22:48:43.023925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.774 qpair failed and we were unable to recover it. 00:24:59.774 [2024-07-15 22:48:43.024064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.774 [2024-07-15 22:48:43.024089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.774 qpair failed and we were unable to recover it. 
00:24:59.774 [2024-07-15 22:48:43.024269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.774 [2024-07-15 22:48:43.024295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.774 qpair failed and we were unable to recover it. 00:24:59.774 [2024-07-15 22:48:43.024434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.774 [2024-07-15 22:48:43.024459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.774 qpair failed and we were unable to recover it. 00:24:59.774 [2024-07-15 22:48:43.024598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.774 [2024-07-15 22:48:43.024624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.774 qpair failed and we were unable to recover it. 00:24:59.774 [2024-07-15 22:48:43.024781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.774 [2024-07-15 22:48:43.024807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.774 qpair failed and we were unable to recover it. 00:24:59.774 [2024-07-15 22:48:43.024978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.774 [2024-07-15 22:48:43.025004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.774 qpair failed and we were unable to recover it. 
00:24:59.774 [2024-07-15 22:48:43.025170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.774 [2024-07-15 22:48:43.025195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.774 qpair failed and we were unable to recover it.
00:24:59.777 [2024-07-15 22:48:43.046349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.777 [2024-07-15 22:48:43.046374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.777 qpair failed and we were unable to recover it. 00:24:59.777 [2024-07-15 22:48:43.046520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.777 [2024-07-15 22:48:43.046546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.777 qpair failed and we were unable to recover it. 00:24:59.777 [2024-07-15 22:48:43.046691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.777 [2024-07-15 22:48:43.046717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.777 qpair failed and we were unable to recover it. 00:24:59.777 [2024-07-15 22:48:43.046889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.777 [2024-07-15 22:48:43.046916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.777 qpair failed and we were unable to recover it. 00:24:59.777 [2024-07-15 22:48:43.047077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.777 [2024-07-15 22:48:43.047103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.777 qpair failed and we were unable to recover it. 
00:24:59.777 [2024-07-15 22:48:43.047251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.777 [2024-07-15 22:48:43.047276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.777 qpair failed and we were unable to recover it. 00:24:59.777 [2024-07-15 22:48:43.047444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.777 [2024-07-15 22:48:43.047470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.777 qpair failed and we were unable to recover it. 00:24:59.777 [2024-07-15 22:48:43.047657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.777 [2024-07-15 22:48:43.047683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.777 qpair failed and we were unable to recover it. 00:24:59.777 [2024-07-15 22:48:43.047869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.777 [2024-07-15 22:48:43.047899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.777 qpair failed and we were unable to recover it. 00:24:59.777 [2024-07-15 22:48:43.048044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.777 [2024-07-15 22:48:43.048070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.777 qpair failed and we were unable to recover it. 
00:24:59.777 [2024-07-15 22:48:43.048252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.777 [2024-07-15 22:48:43.048278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.777 qpair failed and we were unable to recover it. 00:24:59.777 [2024-07-15 22:48:43.048419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.777 [2024-07-15 22:48:43.048444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.777 qpair failed and we were unable to recover it. 00:24:59.777 [2024-07-15 22:48:43.048596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.777 [2024-07-15 22:48:43.048621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.777 qpair failed and we were unable to recover it. 00:24:59.777 [2024-07-15 22:48:43.048785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.777 [2024-07-15 22:48:43.048811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.777 qpair failed and we were unable to recover it. 00:24:59.777 [2024-07-15 22:48:43.049031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.777 [2024-07-15 22:48:43.049057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.777 qpair failed and we were unable to recover it. 
00:24:59.777 [2024-07-15 22:48:43.049218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.777 [2024-07-15 22:48:43.049243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.777 qpair failed and we were unable to recover it. 00:24:59.777 [2024-07-15 22:48:43.049382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.777 [2024-07-15 22:48:43.049408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.777 qpair failed and we were unable to recover it. 00:24:59.777 [2024-07-15 22:48:43.049584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.777 [2024-07-15 22:48:43.049610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.777 qpair failed and we were unable to recover it. 00:24:59.778 [2024-07-15 22:48:43.049758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.049783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 00:24:59.778 [2024-07-15 22:48:43.049955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.049981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 
00:24:59.778 [2024-07-15 22:48:43.050122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.050148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 00:24:59.778 [2024-07-15 22:48:43.050324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.050350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 00:24:59.778 [2024-07-15 22:48:43.050489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.050515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 00:24:59.778 [2024-07-15 22:48:43.050655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.050681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 00:24:59.778 [2024-07-15 22:48:43.050834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.050861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 
00:24:59.778 [2024-07-15 22:48:43.051032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.051059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 00:24:59.778 [2024-07-15 22:48:43.051236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.051262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 00:24:59.778 [2024-07-15 22:48:43.051466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.051492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 00:24:59.778 [2024-07-15 22:48:43.051634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.051660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 00:24:59.778 [2024-07-15 22:48:43.051803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.051832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 
00:24:59.778 [2024-07-15 22:48:43.051974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.052000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 00:24:59.778 [2024-07-15 22:48:43.052145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.052170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 00:24:59.778 [2024-07-15 22:48:43.052367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.052392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 00:24:59.778 [2024-07-15 22:48:43.052533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.052558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 00:24:59.778 [2024-07-15 22:48:43.052696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.052721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 
00:24:59.778 [2024-07-15 22:48:43.052890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.052917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 00:24:59.778 [2024-07-15 22:48:43.053087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.053113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 00:24:59.778 [2024-07-15 22:48:43.053260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.053286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 00:24:59.778 [2024-07-15 22:48:43.053428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.053455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 00:24:59.778 [2024-07-15 22:48:43.053600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.053628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 
00:24:59.778 [2024-07-15 22:48:43.053762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.053788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 00:24:59.778 [2024-07-15 22:48:43.053952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.053979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 00:24:59.778 [2024-07-15 22:48:43.054120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.054147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 00:24:59.778 [2024-07-15 22:48:43.054319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.054345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 00:24:59.778 [2024-07-15 22:48:43.054491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.054517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 
00:24:59.778 [2024-07-15 22:48:43.054683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.054708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 00:24:59.778 [2024-07-15 22:48:43.054855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.054885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 00:24:59.778 [2024-07-15 22:48:43.055049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.055075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 00:24:59.778 [2024-07-15 22:48:43.055220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.055247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 00:24:59.778 [2024-07-15 22:48:43.055423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.055449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 
00:24:59.778 [2024-07-15 22:48:43.055600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.055626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 00:24:59.778 [2024-07-15 22:48:43.055773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.055799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 00:24:59.778 [2024-07-15 22:48:43.055974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.056001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 00:24:59.778 [2024-07-15 22:48:43.056136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.056162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 00:24:59.778 [2024-07-15 22:48:43.056308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.056335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 
00:24:59.778 [2024-07-15 22:48:43.056504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.056530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 00:24:59.778 [2024-07-15 22:48:43.056706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.056732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 00:24:59.778 [2024-07-15 22:48:43.056906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.056932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 00:24:59.778 [2024-07-15 22:48:43.057102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.778 [2024-07-15 22:48:43.057128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.778 qpair failed and we were unable to recover it. 00:24:59.779 [2024-07-15 22:48:43.057290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.779 [2024-07-15 22:48:43.057316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.779 qpair failed and we were unable to recover it. 
00:24:59.779 [2024-07-15 22:48:43.057455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.779 [2024-07-15 22:48:43.057481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.779 qpair failed and we were unable to recover it. 00:24:59.779 [2024-07-15 22:48:43.057669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.779 [2024-07-15 22:48:43.057695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.779 qpair failed and we were unable to recover it. 00:24:59.779 [2024-07-15 22:48:43.057836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.779 [2024-07-15 22:48:43.057861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.779 qpair failed and we were unable to recover it. 00:24:59.779 [2024-07-15 22:48:43.058061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.779 [2024-07-15 22:48:43.058089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.779 qpair failed and we were unable to recover it. 00:24:59.779 [2024-07-15 22:48:43.058235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.779 [2024-07-15 22:48:43.058261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.779 qpair failed and we were unable to recover it. 
00:24:59.779 [2024-07-15 22:48:43.058429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.779 [2024-07-15 22:48:43.058455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.779 qpair failed and we were unable to recover it. 00:24:59.779 [2024-07-15 22:48:43.058636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.779 [2024-07-15 22:48:43.058662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.779 qpair failed and we were unable to recover it. 00:24:59.779 [2024-07-15 22:48:43.058811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.779 [2024-07-15 22:48:43.058837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.779 qpair failed and we were unable to recover it. 00:24:59.779 [2024-07-15 22:48:43.058982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.779 [2024-07-15 22:48:43.059008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.779 qpair failed and we were unable to recover it. 00:24:59.779 [2024-07-15 22:48:43.059178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.779 [2024-07-15 22:48:43.059207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.779 qpair failed and we were unable to recover it. 
00:24:59.779 [2024-07-15 22:48:43.059378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.779 [2024-07-15 22:48:43.059403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.779 qpair failed and we were unable to recover it. 00:24:59.779 [2024-07-15 22:48:43.059590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.779 [2024-07-15 22:48:43.059616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.779 qpair failed and we were unable to recover it. 00:24:59.779 [2024-07-15 22:48:43.059753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.779 [2024-07-15 22:48:43.059778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.779 qpair failed and we were unable to recover it. 00:24:59.779 [2024-07-15 22:48:43.059923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.779 [2024-07-15 22:48:43.059950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.779 qpair failed and we were unable to recover it. 00:24:59.779 [2024-07-15 22:48:43.060121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.779 [2024-07-15 22:48:43.060147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.779 qpair failed and we were unable to recover it. 
00:24:59.779 [2024-07-15 22:48:43.060325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.779 [2024-07-15 22:48:43.060352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.779 qpair failed and we were unable to recover it. 00:24:59.779 [2024-07-15 22:48:43.060494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.779 [2024-07-15 22:48:43.060520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.779 qpair failed and we were unable to recover it. 00:24:59.779 [2024-07-15 22:48:43.060675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.779 [2024-07-15 22:48:43.060701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.779 qpair failed and we were unable to recover it. 00:24:59.779 [2024-07-15 22:48:43.060843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.779 [2024-07-15 22:48:43.060870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.779 qpair failed and we were unable to recover it. 00:24:59.779 [2024-07-15 22:48:43.061015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.779 [2024-07-15 22:48:43.061041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.779 qpair failed and we were unable to recover it. 
00:24:59.782 [2024-07-15 22:48:43.081523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.782 [2024-07-15 22:48:43.081549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.782 qpair failed and we were unable to recover it. 00:24:59.782 [2024-07-15 22:48:43.081692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.782 [2024-07-15 22:48:43.081718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.782 qpair failed and we were unable to recover it. 00:24:59.782 [2024-07-15 22:48:43.081891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.782 [2024-07-15 22:48:43.081918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.782 qpair failed and we were unable to recover it. 00:24:59.782 [2024-07-15 22:48:43.082064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.782 [2024-07-15 22:48:43.082091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.782 qpair failed and we were unable to recover it. 00:24:59.782 [2024-07-15 22:48:43.082273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.782 [2024-07-15 22:48:43.082298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.782 qpair failed and we were unable to recover it. 
00:24:59.782 [2024-07-15 22:48:43.082470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.782 [2024-07-15 22:48:43.082495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.782 qpair failed and we were unable to recover it. 00:24:59.782 [2024-07-15 22:48:43.082662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.782 [2024-07-15 22:48:43.082688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.782 qpair failed and we were unable to recover it. 00:24:59.782 [2024-07-15 22:48:43.082854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.782 [2024-07-15 22:48:43.082885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.782 qpair failed and we were unable to recover it. 00:24:59.782 [2024-07-15 22:48:43.083038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.782 [2024-07-15 22:48:43.083064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.782 qpair failed and we were unable to recover it. 00:24:59.782 [2024-07-15 22:48:43.083203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.782 [2024-07-15 22:48:43.083229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.782 qpair failed and we were unable to recover it. 
00:24:59.782 [2024-07-15 22:48:43.083373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.782 [2024-07-15 22:48:43.083398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.782 qpair failed and we were unable to recover it. 00:24:59.782 [2024-07-15 22:48:43.083566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.782 [2024-07-15 22:48:43.083592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.782 qpair failed and we were unable to recover it. 00:24:59.782 [2024-07-15 22:48:43.083734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.782 [2024-07-15 22:48:43.083760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.782 qpair failed and we were unable to recover it. 00:24:59.782 [2024-07-15 22:48:43.083915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.782 [2024-07-15 22:48:43.083942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.782 qpair failed and we were unable to recover it. 00:24:59.782 [2024-07-15 22:48:43.084109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.782 [2024-07-15 22:48:43.084135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.782 qpair failed and we were unable to recover it. 
00:24:59.782 [2024-07-15 22:48:43.084302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.782 [2024-07-15 22:48:43.084327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.782 qpair failed and we were unable to recover it. 00:24:59.782 [2024-07-15 22:48:43.084467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.782 [2024-07-15 22:48:43.084493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.782 qpair failed and we were unable to recover it. 00:24:59.782 [2024-07-15 22:48:43.084627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.782 [2024-07-15 22:48:43.084652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.782 qpair failed and we were unable to recover it. 00:24:59.782 [2024-07-15 22:48:43.084840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.782 [2024-07-15 22:48:43.084866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.782 qpair failed and we were unable to recover it. 00:24:59.782 [2024-07-15 22:48:43.085048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.782 [2024-07-15 22:48:43.085074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.782 qpair failed and we were unable to recover it. 
00:24:59.782 [2024-07-15 22:48:43.085241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.782 [2024-07-15 22:48:43.085267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.782 qpair failed and we were unable to recover it. 00:24:59.782 [2024-07-15 22:48:43.085434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.782 [2024-07-15 22:48:43.085460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.782 qpair failed and we were unable to recover it. 00:24:59.782 [2024-07-15 22:48:43.085602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.782 [2024-07-15 22:48:43.085627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.782 qpair failed and we were unable to recover it. 00:24:59.782 [2024-07-15 22:48:43.085817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.782 [2024-07-15 22:48:43.085842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.782 qpair failed and we were unable to recover it. 00:24:59.782 [2024-07-15 22:48:43.085992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.782 [2024-07-15 22:48:43.086020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.782 qpair failed and we were unable to recover it. 
00:24:59.782 [2024-07-15 22:48:43.086191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.782 [2024-07-15 22:48:43.086217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.782 qpair failed and we were unable to recover it. 00:24:59.782 [2024-07-15 22:48:43.086386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.086416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 00:24:59.783 [2024-07-15 22:48:43.086560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.086586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 00:24:59.783 [2024-07-15 22:48:43.086745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.086771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 00:24:59.783 [2024-07-15 22:48:43.086978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.087005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 
00:24:59.783 [2024-07-15 22:48:43.087181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.087206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 00:24:59.783 [2024-07-15 22:48:43.087399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.087425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 00:24:59.783 [2024-07-15 22:48:43.087565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.087590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 00:24:59.783 [2024-07-15 22:48:43.087760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.087785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 00:24:59.783 [2024-07-15 22:48:43.087953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.087979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 
00:24:59.783 [2024-07-15 22:48:43.088162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.088188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 00:24:59.783 [2024-07-15 22:48:43.088323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.088348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 00:24:59.783 [2024-07-15 22:48:43.088491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.088516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 00:24:59.783 [2024-07-15 22:48:43.088702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.088728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 00:24:59.783 [2024-07-15 22:48:43.088895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.088922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 
00:24:59.783 [2024-07-15 22:48:43.089082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.089108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 00:24:59.783 [2024-07-15 22:48:43.089254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.089280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 00:24:59.783 [2024-07-15 22:48:43.089418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.089443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 00:24:59.783 [2024-07-15 22:48:43.089628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.089654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 00:24:59.783 [2024-07-15 22:48:43.089857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.089887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 
00:24:59.783 [2024-07-15 22:48:43.090030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.090055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 00:24:59.783 [2024-07-15 22:48:43.090204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.090229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 00:24:59.783 [2024-07-15 22:48:43.090374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.090400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 00:24:59.783 [2024-07-15 22:48:43.090543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.090569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 00:24:59.783 [2024-07-15 22:48:43.090709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.090734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 
00:24:59.783 [2024-07-15 22:48:43.090884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.090911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 00:24:59.783 [2024-07-15 22:48:43.091081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.091107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 00:24:59.783 [2024-07-15 22:48:43.091285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.091311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 00:24:59.783 [2024-07-15 22:48:43.091486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.091511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 00:24:59.783 [2024-07-15 22:48:43.091662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.091689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 
00:24:59.783 [2024-07-15 22:48:43.091860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.091891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 00:24:59.783 [2024-07-15 22:48:43.092042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.092068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 00:24:59.783 [2024-07-15 22:48:43.092201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.092227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 00:24:59.783 [2024-07-15 22:48:43.092375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.092400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 00:24:59.783 [2024-07-15 22:48:43.092588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.092613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 
00:24:59.783 [2024-07-15 22:48:43.092753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.092779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 00:24:59.783 [2024-07-15 22:48:43.092920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.092946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 00:24:59.783 [2024-07-15 22:48:43.093121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.093147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 00:24:59.783 [2024-07-15 22:48:43.093301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.093327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 00:24:59.783 [2024-07-15 22:48:43.093506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.093532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 
00:24:59.783 [2024-07-15 22:48:43.093671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.093696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 00:24:59.783 [2024-07-15 22:48:43.093834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.783 [2024-07-15 22:48:43.093864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.783 qpair failed and we were unable to recover it. 00:24:59.783 [2024-07-15 22:48:43.094064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.784 [2024-07-15 22:48:43.094090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.784 qpair failed and we were unable to recover it. 00:24:59.784 [2024-07-15 22:48:43.094230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.784 [2024-07-15 22:48:43.094256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.784 qpair failed and we were unable to recover it. 00:24:59.784 [2024-07-15 22:48:43.094413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.784 [2024-07-15 22:48:43.094439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.784 qpair failed and we were unable to recover it. 
00:24:59.784 [2024-07-15 22:48:43.094611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.784 [2024-07-15 22:48:43.094638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.784 qpair failed and we were unable to recover it. 00:24:59.784 [2024-07-15 22:48:43.094780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.784 [2024-07-15 22:48:43.094806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.784 qpair failed and we were unable to recover it. 00:24:59.784 [2024-07-15 22:48:43.094980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.784 [2024-07-15 22:48:43.095006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.784 qpair failed and we were unable to recover it. 00:24:59.784 [2024-07-15 22:48:43.095173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.784 [2024-07-15 22:48:43.095198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.784 qpair failed and we were unable to recover it. 00:24:59.784 [2024-07-15 22:48:43.095346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.784 [2024-07-15 22:48:43.095373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.784 qpair failed and we were unable to recover it. 
00:24:59.784 [2024-07-15 22:48:43.095514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.784 [2024-07-15 22:48:43.095541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.784 qpair failed and we were unable to recover it. 00:24:59.784 [2024-07-15 22:48:43.095706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.784 [2024-07-15 22:48:43.095732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.784 qpair failed and we were unable to recover it. 00:24:59.784 [2024-07-15 22:48:43.095898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.784 [2024-07-15 22:48:43.095924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.784 qpair failed and we were unable to recover it. 00:24:59.784 [2024-07-15 22:48:43.096063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.784 [2024-07-15 22:48:43.096089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.784 qpair failed and we were unable to recover it. 00:24:59.784 [2024-07-15 22:48:43.096266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.784 [2024-07-15 22:48:43.096291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.784 qpair failed and we were unable to recover it. 
00:24:59.787 [2024-07-15 22:48:43.116555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.116581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.787 qpair failed and we were unable to recover it. 00:24:59.787 [2024-07-15 22:48:43.116719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.116744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.787 qpair failed and we were unable to recover it. 00:24:59.787 [2024-07-15 22:48:43.116900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.116926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.787 qpair failed and we were unable to recover it. 00:24:59.787 [2024-07-15 22:48:43.117095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.117121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.787 qpair failed and we were unable to recover it. 00:24:59.787 [2024-07-15 22:48:43.117262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.117288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.787 qpair failed and we were unable to recover it. 
00:24:59.787 [2024-07-15 22:48:43.117459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.117484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.787 qpair failed and we were unable to recover it. 00:24:59.787 [2024-07-15 22:48:43.117684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.117710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.787 qpair failed and we were unable to recover it. 00:24:59.787 [2024-07-15 22:48:43.117870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.117899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.787 qpair failed and we were unable to recover it. 00:24:59.787 [2024-07-15 22:48:43.118065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.118091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.787 qpair failed and we were unable to recover it. 00:24:59.787 [2024-07-15 22:48:43.118239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.118264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.787 qpair failed and we were unable to recover it. 
00:24:59.787 [2024-07-15 22:48:43.118413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.118439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.787 qpair failed and we were unable to recover it. 00:24:59.787 [2024-07-15 22:48:43.118584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.118610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.787 qpair failed and we were unable to recover it. 00:24:59.787 [2024-07-15 22:48:43.118764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.118790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.787 qpair failed and we were unable to recover it. 00:24:59.787 [2024-07-15 22:48:43.118938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.118965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.787 qpair failed and we were unable to recover it. 00:24:59.787 [2024-07-15 22:48:43.119141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.119167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.787 qpair failed and we were unable to recover it. 
00:24:59.787 [2024-07-15 22:48:43.119304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.119330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.787 qpair failed and we were unable to recover it. 00:24:59.787 [2024-07-15 22:48:43.119472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.119498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.787 qpair failed and we were unable to recover it. 00:24:59.787 [2024-07-15 22:48:43.119645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.119671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.787 qpair failed and we were unable to recover it. 00:24:59.787 [2024-07-15 22:48:43.119819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.119845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.787 qpair failed and we were unable to recover it. 00:24:59.787 [2024-07-15 22:48:43.120041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.120067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.787 qpair failed and we were unable to recover it. 
00:24:59.787 [2024-07-15 22:48:43.120231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.120257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.787 qpair failed and we were unable to recover it. 00:24:59.787 [2024-07-15 22:48:43.120424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.120449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.787 qpair failed and we were unable to recover it. 00:24:59.787 [2024-07-15 22:48:43.120591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.120618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.787 qpair failed and we were unable to recover it. 00:24:59.787 [2024-07-15 22:48:43.120796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.120822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.787 qpair failed and we were unable to recover it. 00:24:59.787 [2024-07-15 22:48:43.120976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.121002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.787 qpair failed and we were unable to recover it. 
00:24:59.787 [2024-07-15 22:48:43.121148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.121174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.787 qpair failed and we were unable to recover it. 00:24:59.787 [2024-07-15 22:48:43.121311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.121337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.787 qpair failed and we were unable to recover it. 00:24:59.787 [2024-07-15 22:48:43.121494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.121520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.787 qpair failed and we were unable to recover it. 00:24:59.787 [2024-07-15 22:48:43.121671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.121697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.787 qpair failed and we were unable to recover it. 00:24:59.787 [2024-07-15 22:48:43.121856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.121886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.787 qpair failed and we were unable to recover it. 
00:24:59.787 [2024-07-15 22:48:43.122038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.122063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.787 qpair failed and we were unable to recover it. 00:24:59.787 [2024-07-15 22:48:43.122231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.122257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.787 qpair failed and we were unable to recover it. 00:24:59.787 [2024-07-15 22:48:43.122434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.122460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.787 qpair failed and we were unable to recover it. 00:24:59.787 [2024-07-15 22:48:43.122636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.122661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.787 qpair failed and we were unable to recover it. 00:24:59.787 [2024-07-15 22:48:43.122804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.787 [2024-07-15 22:48:43.122830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 
00:24:59.788 [2024-07-15 22:48:43.122979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.123005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 00:24:59.788 [2024-07-15 22:48:43.123143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.123172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 00:24:59.788 [2024-07-15 22:48:43.123373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.123398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 00:24:59.788 [2024-07-15 22:48:43.123536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.123562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 00:24:59.788 [2024-07-15 22:48:43.123734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.123760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 
00:24:59.788 [2024-07-15 22:48:43.123933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.123959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 00:24:59.788 [2024-07-15 22:48:43.124119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.124145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 00:24:59.788 [2024-07-15 22:48:43.124289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.124314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 00:24:59.788 [2024-07-15 22:48:43.124475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.124500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 00:24:59.788 [2024-07-15 22:48:43.124688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.124713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 
00:24:59.788 [2024-07-15 22:48:43.124890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.124916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 00:24:59.788 [2024-07-15 22:48:43.125065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.125091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 00:24:59.788 [2024-07-15 22:48:43.125263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.125289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 00:24:59.788 [2024-07-15 22:48:43.125429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.125456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 00:24:59.788 [2024-07-15 22:48:43.125608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.125634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 
00:24:59.788 [2024-07-15 22:48:43.125831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.125857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 00:24:59.788 [2024-07-15 22:48:43.126014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.126040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 00:24:59.788 [2024-07-15 22:48:43.126218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.126244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 00:24:59.788 [2024-07-15 22:48:43.126386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.126412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 00:24:59.788 [2024-07-15 22:48:43.126581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.126607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 
00:24:59.788 [2024-07-15 22:48:43.126779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.126804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 00:24:59.788 [2024-07-15 22:48:43.126948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.126974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 00:24:59.788 [2024-07-15 22:48:43.127140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.127165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 00:24:59.788 [2024-07-15 22:48:43.127327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.127353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 00:24:59.788 [2024-07-15 22:48:43.127548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.127574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 
00:24:59.788 [2024-07-15 22:48:43.127719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.127745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 00:24:59.788 [2024-07-15 22:48:43.127903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.127930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 00:24:59.788 [2024-07-15 22:48:43.128105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.128131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 00:24:59.788 [2024-07-15 22:48:43.128306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.128333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 00:24:59.788 [2024-07-15 22:48:43.128474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.128501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 
00:24:59.788 [2024-07-15 22:48:43.128643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.128670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 00:24:59.788 [2024-07-15 22:48:43.128842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.128868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 00:24:59.788 [2024-07-15 22:48:43.129055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.129081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 00:24:59.788 [2024-07-15 22:48:43.129252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.129277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 00:24:59.788 [2024-07-15 22:48:43.129425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.129451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 
00:24:59.788 [2024-07-15 22:48:43.129593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.129619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 00:24:59.788 [2024-07-15 22:48:43.129790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.129815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 00:24:59.788 [2024-07-15 22:48:43.129964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.129990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 00:24:59.788 [2024-07-15 22:48:43.130137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.130163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 00:24:59.788 [2024-07-15 22:48:43.130331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.788 [2024-07-15 22:48:43.130356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.788 qpair failed and we were unable to recover it. 
00:24:59.788 [2024-07-15 22:48:43.130510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.788 [2024-07-15 22:48:43.130535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420
00:24:59.789 qpair failed and we were unable to recover it.
00:24:59.791 [2024-07-15 22:48:43.151716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.791 [2024-07-15 22:48:43.151743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.791 qpair failed and we were unable to recover it. 00:24:59.791 [2024-07-15 22:48:43.151896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.791 [2024-07-15 22:48:43.151923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.791 qpair failed and we were unable to recover it. 00:24:59.791 [2024-07-15 22:48:43.152095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.152121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 00:24:59.792 [2024-07-15 22:48:43.152291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.152317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 00:24:59.792 [2024-07-15 22:48:43.152465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.152492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 
00:24:59.792 [2024-07-15 22:48:43.152641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.152667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 00:24:59.792 [2024-07-15 22:48:43.152848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.152888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 00:24:59.792 [2024-07-15 22:48:43.153066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.153092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 00:24:59.792 [2024-07-15 22:48:43.153231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.153257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 00:24:59.792 [2024-07-15 22:48:43.153418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.153443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 
00:24:59.792 [2024-07-15 22:48:43.153611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.153637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 00:24:59.792 [2024-07-15 22:48:43.153792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.153818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 00:24:59.792 [2024-07-15 22:48:43.153979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.154006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 00:24:59.792 [2024-07-15 22:48:43.154185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.154211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 00:24:59.792 [2024-07-15 22:48:43.154354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.154380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 
00:24:59.792 [2024-07-15 22:48:43.154520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.154545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 00:24:59.792 [2024-07-15 22:48:43.154713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.154739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 00:24:59.792 [2024-07-15 22:48:43.154889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.154915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 00:24:59.792 [2024-07-15 22:48:43.155095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.155121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 00:24:59.792 [2024-07-15 22:48:43.155259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.155285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 
00:24:59.792 [2024-07-15 22:48:43.155458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.155483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 00:24:59.792 [2024-07-15 22:48:43.155654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.155680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 00:24:59.792 [2024-07-15 22:48:43.155855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.155886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 00:24:59.792 [2024-07-15 22:48:43.156043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.156069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 00:24:59.792 [2024-07-15 22:48:43.156205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.156231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 
00:24:59.792 [2024-07-15 22:48:43.156385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.156410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 00:24:59.792 [2024-07-15 22:48:43.156552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.156578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 00:24:59.792 [2024-07-15 22:48:43.156719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.156744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 00:24:59.792 [2024-07-15 22:48:43.156898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.156924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 00:24:59.792 [2024-07-15 22:48:43.157068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.157094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 
00:24:59.792 [2024-07-15 22:48:43.157266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.157291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 00:24:59.792 [2024-07-15 22:48:43.157430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.157456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 00:24:59.792 [2024-07-15 22:48:43.157604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.157630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 00:24:59.792 [2024-07-15 22:48:43.157796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.157838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 00:24:59.792 [2024-07-15 22:48:43.158026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.158055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 
00:24:59.792 [2024-07-15 22:48:43.158208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.158235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 00:24:59.792 [2024-07-15 22:48:43.158424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.158450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 00:24:59.792 [2024-07-15 22:48:43.158613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.158640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 00:24:59.792 [2024-07-15 22:48:43.158792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.158819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 00:24:59.792 [2024-07-15 22:48:43.158998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.159025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 
00:24:59.792 [2024-07-15 22:48:43.159195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.159221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 00:24:59.792 [2024-07-15 22:48:43.159392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.159417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 00:24:59.792 [2024-07-15 22:48:43.159564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.792 [2024-07-15 22:48:43.159589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.792 qpair failed and we were unable to recover it. 00:24:59.792 [2024-07-15 22:48:43.159757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.793 [2024-07-15 22:48:43.159783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.793 qpair failed and we were unable to recover it. 00:24:59.793 [2024-07-15 22:48:43.159932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.793 [2024-07-15 22:48:43.159958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.793 qpair failed and we were unable to recover it. 
00:24:59.793 [2024-07-15 22:48:43.160096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.793 [2024-07-15 22:48:43.160121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.793 qpair failed and we were unable to recover it. 00:24:59.793 [2024-07-15 22:48:43.160260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.793 [2024-07-15 22:48:43.160290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.793 qpair failed and we were unable to recover it. 00:24:59.793 [2024-07-15 22:48:43.160430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.793 [2024-07-15 22:48:43.160455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.793 qpair failed and we were unable to recover it. 00:24:59.793 [2024-07-15 22:48:43.160652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.793 [2024-07-15 22:48:43.160678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.793 qpair failed and we were unable to recover it. 00:24:59.793 [2024-07-15 22:48:43.160814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.793 [2024-07-15 22:48:43.160841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.793 qpair failed and we were unable to recover it. 
00:24:59.793 [2024-07-15 22:48:43.161030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.793 [2024-07-15 22:48:43.161056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd668000b90 with addr=10.0.0.2, port=4420 00:24:59.793 qpair failed and we were unable to recover it. 00:24:59.793 [2024-07-15 22:48:43.161212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.793 [2024-07-15 22:48:43.161253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.793 qpair failed and we were unable to recover it. 00:24:59.793 [2024-07-15 22:48:43.161421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.793 [2024-07-15 22:48:43.161448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.793 qpair failed and we were unable to recover it. 00:24:59.793 [2024-07-15 22:48:43.161599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.793 [2024-07-15 22:48:43.161625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.793 qpair failed and we were unable to recover it. 00:24:59.793 [2024-07-15 22:48:43.161793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.793 [2024-07-15 22:48:43.161819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.793 qpair failed and we were unable to recover it. 
00:24:59.793 [2024-07-15 22:48:43.161984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.793 [2024-07-15 22:48:43.162011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.793 qpair failed and we were unable to recover it. 00:24:59.793 [2024-07-15 22:48:43.162157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.793 [2024-07-15 22:48:43.162192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.793 qpair failed and we were unable to recover it. 00:24:59.793 [2024-07-15 22:48:43.162349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.793 [2024-07-15 22:48:43.162375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.793 qpair failed and we were unable to recover it. 00:24:59.793 [2024-07-15 22:48:43.162522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.793 [2024-07-15 22:48:43.162547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.793 qpair failed and we were unable to recover it. 00:24:59.793 [2024-07-15 22:48:43.162684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.793 [2024-07-15 22:48:43.162710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.793 qpair failed and we were unable to recover it. 
00:24:59.793 [2024-07-15 22:48:43.162883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.793 [2024-07-15 22:48:43.162910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.793 qpair failed and we were unable to recover it. 00:24:59.793 [2024-07-15 22:48:43.163059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.793 [2024-07-15 22:48:43.163087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.793 qpair failed and we were unable to recover it. 00:24:59.793 [2024-07-15 22:48:43.163240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.793 [2024-07-15 22:48:43.163266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.793 qpair failed and we were unable to recover it. 00:24:59.793 [2024-07-15 22:48:43.163441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.793 [2024-07-15 22:48:43.163467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.793 qpair failed and we were unable to recover it. 00:24:59.793 [2024-07-15 22:48:43.163608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.793 [2024-07-15 22:48:43.163635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.793 qpair failed and we were unable to recover it. 
00:24:59.793 [2024-07-15 22:48:43.163778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.793 [2024-07-15 22:48:43.163804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.793 qpair failed and we were unable to recover it. 00:24:59.793 [2024-07-15 22:48:43.163992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.793 [2024-07-15 22:48:43.164018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.793 qpair failed and we were unable to recover it. 00:24:59.793 [2024-07-15 22:48:43.164176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.793 [2024-07-15 22:48:43.164202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.793 qpair failed and we were unable to recover it. 00:24:59.793 [2024-07-15 22:48:43.164345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.793 [2024-07-15 22:48:43.164371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.793 qpair failed and we were unable to recover it. 00:24:59.793 [2024-07-15 22:48:43.164539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.793 [2024-07-15 22:48:43.164564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.793 qpair failed and we were unable to recover it. 
00:24:59.793 [2024-07-15 22:48:43.164720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.793 [2024-07-15 22:48:43.164746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.793 qpair failed and we were unable to recover it. 00:24:59.793 [2024-07-15 22:48:43.164888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.793 [2024-07-15 22:48:43.164914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.793 qpair failed and we were unable to recover it. 00:24:59.793 [2024-07-15 22:48:43.165053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.793 [2024-07-15 22:48:43.165078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.793 qpair failed and we were unable to recover it. 00:24:59.793 [2024-07-15 22:48:43.165265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.793 [2024-07-15 22:48:43.165296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.793 qpair failed and we were unable to recover it. 00:24:59.793 [2024-07-15 22:48:43.165473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.793 [2024-07-15 22:48:43.165499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.793 qpair failed and we were unable to recover it. 
00:24:59.793 [2024-07-15 22:48:43.165670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.793 [2024-07-15 22:48:43.165696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420 00:24:59.793 qpair failed and we were unable to recover it.
[... the same connect()-failed / qpair-failed pair repeats ~115 more times between 22:48:43.165 and 22:48:43.187, cycling across tqpair handles 0x7fd660000b90, 0x7fd668000b90, and 0xd3f200, all targeting addr=10.0.0.2, port=4420 ...]
00:24:59.796 [2024-07-15 22:48:43.187219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.796 [2024-07-15 22:48:43.187246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.796 qpair failed and we were unable to recover it. 00:24:59.796 [2024-07-15 22:48:43.187403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.796 [2024-07-15 22:48:43.187428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.796 qpair failed and we were unable to recover it. 00:24:59.796 [2024-07-15 22:48:43.187570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.796 [2024-07-15 22:48:43.187595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.796 qpair failed and we were unable to recover it. 00:24:59.796 [2024-07-15 22:48:43.187767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.796 [2024-07-15 22:48:43.187793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.796 qpair failed and we were unable to recover it. 00:24:59.796 [2024-07-15 22:48:43.187953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.796 [2024-07-15 22:48:43.187980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.796 qpair failed and we were unable to recover it. 
00:24:59.796 [2024-07-15 22:48:43.188128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.796 [2024-07-15 22:48:43.188153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.796 qpair failed and we were unable to recover it. 00:24:59.796 [2024-07-15 22:48:43.188299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.796 [2024-07-15 22:48:43.188325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.796 qpair failed and we were unable to recover it. 00:24:59.796 [2024-07-15 22:48:43.188496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.796 [2024-07-15 22:48:43.188521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.796 qpair failed and we were unable to recover it. 00:24:59.796 [2024-07-15 22:48:43.188696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.796 [2024-07-15 22:48:43.188722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.796 qpair failed and we were unable to recover it. 00:24:59.796 [2024-07-15 22:48:43.188859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.796 [2024-07-15 22:48:43.188900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.796 qpair failed and we were unable to recover it. 
00:24:59.796 [2024-07-15 22:48:43.189042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.796 [2024-07-15 22:48:43.189068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.796 qpair failed and we were unable to recover it. 00:24:59.796 [2024-07-15 22:48:43.189265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.796 [2024-07-15 22:48:43.189291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.796 qpair failed and we were unable to recover it. 00:24:59.797 [2024-07-15 22:48:43.189430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.189456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 00:24:59.797 [2024-07-15 22:48:43.189629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.189655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 00:24:59.797 [2024-07-15 22:48:43.189806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.189831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 
00:24:59.797 [2024-07-15 22:48:43.189994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.190019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 00:24:59.797 [2024-07-15 22:48:43.190189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.190214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 00:24:59.797 [2024-07-15 22:48:43.190348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.190373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 00:24:59.797 [2024-07-15 22:48:43.190528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.190557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 00:24:59.797 [2024-07-15 22:48:43.190698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.190723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 
00:24:59.797 [2024-07-15 22:48:43.190860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.190892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 00:24:59.797 [2024-07-15 22:48:43.191031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.191056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 00:24:59.797 [2024-07-15 22:48:43.191202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.191228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 00:24:59.797 [2024-07-15 22:48:43.191365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.191390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 00:24:59.797 [2024-07-15 22:48:43.191569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.191594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 
00:24:59.797 [2024-07-15 22:48:43.191745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.191771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 00:24:59.797 [2024-07-15 22:48:43.191910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.191936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 00:24:59.797 [2024-07-15 22:48:43.192077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.192103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 00:24:59.797 [2024-07-15 22:48:43.192260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.192286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 00:24:59.797 [2024-07-15 22:48:43.192418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.192444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 
00:24:59.797 [2024-07-15 22:48:43.192598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.192623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 00:24:59.797 [2024-07-15 22:48:43.192767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.192793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 00:24:59.797 [2024-07-15 22:48:43.192945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.192971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 00:24:59.797 [2024-07-15 22:48:43.193126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.193151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 00:24:59.797 [2024-07-15 22:48:43.193321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.193346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 
00:24:59.797 [2024-07-15 22:48:43.193487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.193513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 00:24:59.797 [2024-07-15 22:48:43.193681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.193706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 00:24:59.797 [2024-07-15 22:48:43.193888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.193914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 00:24:59.797 [2024-07-15 22:48:43.194049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.194075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 00:24:59.797 [2024-07-15 22:48:43.194240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.194266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 
00:24:59.797 [2024-07-15 22:48:43.194419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.194445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 00:24:59.797 [2024-07-15 22:48:43.194613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.194638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 00:24:59.797 [2024-07-15 22:48:43.194777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.194802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 00:24:59.797 [2024-07-15 22:48:43.194967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.194994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 00:24:59.797 [2024-07-15 22:48:43.195136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.195162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 
00:24:59.797 [2024-07-15 22:48:43.195315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.195344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 00:24:59.797 [2024-07-15 22:48:43.195479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.195504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 00:24:59.797 [2024-07-15 22:48:43.195657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.797 [2024-07-15 22:48:43.195683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.797 qpair failed and we were unable to recover it. 00:24:59.798 [2024-07-15 22:48:43.195821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.798 [2024-07-15 22:48:43.195846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.798 qpair failed and we were unable to recover it. 00:24:59.798 [2024-07-15 22:48:43.196002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.798 [2024-07-15 22:48:43.196028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.798 qpair failed and we were unable to recover it. 
00:24:59.798 [2024-07-15 22:48:43.196177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.798 [2024-07-15 22:48:43.196202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.798 qpair failed and we were unable to recover it. 00:24:59.798 [2024-07-15 22:48:43.196353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.798 [2024-07-15 22:48:43.196379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.798 qpair failed and we were unable to recover it. 00:24:59.798 [2024-07-15 22:48:43.196521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.798 [2024-07-15 22:48:43.196547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.798 qpair failed and we were unable to recover it. 00:24:59.798 [2024-07-15 22:48:43.196724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.798 [2024-07-15 22:48:43.196749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.798 qpair failed and we were unable to recover it. 00:24:59.798 [2024-07-15 22:48:43.196898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.798 [2024-07-15 22:48:43.196925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.798 qpair failed and we were unable to recover it. 
00:24:59.798 [2024-07-15 22:48:43.197074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.798 [2024-07-15 22:48:43.197099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.798 qpair failed and we were unable to recover it. 00:24:59.798 [2024-07-15 22:48:43.197240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.798 [2024-07-15 22:48:43.197265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.798 qpair failed and we were unable to recover it. 00:24:59.798 [2024-07-15 22:48:43.197430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.798 [2024-07-15 22:48:43.197456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.798 qpair failed and we were unable to recover it. 00:24:59.798 [2024-07-15 22:48:43.197627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.798 [2024-07-15 22:48:43.197652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.798 qpair failed and we were unable to recover it. 00:24:59.798 [2024-07-15 22:48:43.197790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.798 [2024-07-15 22:48:43.197816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.798 qpair failed and we were unable to recover it. 
00:24:59.798 [2024-07-15 22:48:43.197954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.798 [2024-07-15 22:48:43.197981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.798 qpair failed and we were unable to recover it. 00:24:59.798 [2024-07-15 22:48:43.198143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.798 [2024-07-15 22:48:43.198168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.798 qpair failed and we were unable to recover it. 00:24:59.798 [2024-07-15 22:48:43.198340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.798 [2024-07-15 22:48:43.198365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.798 qpair failed and we were unable to recover it. 00:24:59.798 [2024-07-15 22:48:43.198542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.798 [2024-07-15 22:48:43.198567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.798 qpair failed and we were unable to recover it. 00:24:59.798 [2024-07-15 22:48:43.198713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.798 [2024-07-15 22:48:43.198738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.798 qpair failed and we were unable to recover it. 
00:24:59.798 [2024-07-15 22:48:43.198924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.798 [2024-07-15 22:48:43.198950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.798 qpair failed and we were unable to recover it. 00:24:59.798 [2024-07-15 22:48:43.199089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.798 [2024-07-15 22:48:43.199114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.798 qpair failed and we were unable to recover it. 00:24:59.798 [2024-07-15 22:48:43.199273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.798 [2024-07-15 22:48:43.199298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.798 qpair failed and we were unable to recover it. 00:24:59.798 [2024-07-15 22:48:43.199464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.798 [2024-07-15 22:48:43.199490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.798 qpair failed and we were unable to recover it. 00:24:59.798 [2024-07-15 22:48:43.199649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.798 [2024-07-15 22:48:43.199674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.798 qpair failed and we were unable to recover it. 
00:24:59.798 [2024-07-15 22:48:43.199851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.798 [2024-07-15 22:48:43.199882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.798 qpair failed and we were unable to recover it. 00:24:59.798 [2024-07-15 22:48:43.200045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.798 [2024-07-15 22:48:43.200070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.798 qpair failed and we were unable to recover it. 00:24:59.798 [2024-07-15 22:48:43.200243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.798 [2024-07-15 22:48:43.200268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.798 qpair failed and we were unable to recover it. 00:24:59.798 [2024-07-15 22:48:43.200420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.798 [2024-07-15 22:48:43.200446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.798 qpair failed and we were unable to recover it. 00:24:59.798 [2024-07-15 22:48:43.200587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.798 [2024-07-15 22:48:43.200613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.798 qpair failed and we were unable to recover it. 
00:24:59.798 [2024-07-15 22:48:43.200753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.798 [2024-07-15 22:48:43.200779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.798 qpair failed and we were unable to recover it. 00:24:59.798 [2024-07-15 22:48:43.200921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.798 [2024-07-15 22:48:43.200947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.798 qpair failed and we were unable to recover it. 00:24:59.798 [2024-07-15 22:48:43.201088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.798 [2024-07-15 22:48:43.201113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.798 qpair failed and we were unable to recover it. 00:24:59.798 [2024-07-15 22:48:43.201259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.798 [2024-07-15 22:48:43.201284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.798 qpair failed and we were unable to recover it. 00:24:59.798 [2024-07-15 22:48:43.201427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.798 [2024-07-15 22:48:43.201452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.798 qpair failed and we were unable to recover it. 
00:24:59.801 [2024-07-15 22:48:43.221553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-07-15 22:48:43.221579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.801 qpair failed and we were unable to recover it. 00:24:59.801 [2024-07-15 22:48:43.221750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-07-15 22:48:43.221776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.801 qpair failed and we were unable to recover it. 00:24:59.801 [2024-07-15 22:48:43.221920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-07-15 22:48:43.221947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.801 qpair failed and we were unable to recover it. 00:24:59.801 [2024-07-15 22:48:43.222093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-07-15 22:48:43.222118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.801 qpair failed and we were unable to recover it. 00:24:59.801 [2024-07-15 22:48:43.222256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-07-15 22:48:43.222281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.801 qpair failed and we were unable to recover it. 
00:24:59.801 [2024-07-15 22:48:43.222421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-07-15 22:48:43.222450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.801 qpair failed and we were unable to recover it. 00:24:59.801 [2024-07-15 22:48:43.222610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-07-15 22:48:43.222635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.801 qpair failed and we were unable to recover it. 00:24:59.801 [2024-07-15 22:48:43.222805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-07-15 22:48:43.222830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.801 qpair failed and we were unable to recover it. 00:24:59.801 [2024-07-15 22:48:43.223002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-07-15 22:48:43.223028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.801 qpair failed and we were unable to recover it. 00:24:59.801 [2024-07-15 22:48:43.223172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-07-15 22:48:43.223197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.801 qpair failed and we were unable to recover it. 
00:24:59.801 [2024-07-15 22:48:43.223355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-07-15 22:48:43.223380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.801 qpair failed and we were unable to recover it. 00:24:59.801 [2024-07-15 22:48:43.223551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-07-15 22:48:43.223577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.801 qpair failed and we were unable to recover it. 00:24:59.801 [2024-07-15 22:48:43.223741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-07-15 22:48:43.223766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.801 qpair failed and we were unable to recover it. 00:24:59.801 [2024-07-15 22:48:43.223931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-07-15 22:48:43.223956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.801 qpair failed and we were unable to recover it. 00:24:59.801 [2024-07-15 22:48:43.224113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-07-15 22:48:43.224139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.801 qpair failed and we were unable to recover it. 
00:24:59.801 [2024-07-15 22:48:43.224307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-07-15 22:48:43.224332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.801 qpair failed and we were unable to recover it. 00:24:59.801 [2024-07-15 22:48:43.224478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-07-15 22:48:43.224505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.801 qpair failed and we were unable to recover it. 00:24:59.801 [2024-07-15 22:48:43.224658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-07-15 22:48:43.224684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.801 qpair failed and we were unable to recover it. 00:24:59.801 [2024-07-15 22:48:43.224823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-07-15 22:48:43.224848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.801 qpair failed and we were unable to recover it. 00:24:59.801 [2024-07-15 22:48:43.225009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-07-15 22:48:43.225035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.802 qpair failed and we were unable to recover it. 
00:24:59.802 [2024-07-15 22:48:43.225210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 22:48:43.225235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.802 qpair failed and we were unable to recover it. 00:24:59.802 [2024-07-15 22:48:43.225403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 22:48:43.225428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.802 qpair failed and we were unable to recover it. 00:24:59.802 [2024-07-15 22:48:43.225568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 22:48:43.225594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.802 qpair failed and we were unable to recover it. 00:24:59.802 [2024-07-15 22:48:43.225768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 22:48:43.225793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.802 qpair failed and we were unable to recover it. 00:24:59.802 [2024-07-15 22:48:43.225934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 22:48:43.225960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.802 qpair failed and we were unable to recover it. 
00:24:59.802 [2024-07-15 22:48:43.226128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 22:48:43.226154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.802 qpair failed and we were unable to recover it. 00:24:59.802 [2024-07-15 22:48:43.226295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 22:48:43.226320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.802 qpair failed and we were unable to recover it. 00:24:59.802 [2024-07-15 22:48:43.226491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 22:48:43.226517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.802 qpair failed and we were unable to recover it. 00:24:59.802 [2024-07-15 22:48:43.226683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 22:48:43.226709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.802 qpair failed and we were unable to recover it. 00:24:59.802 [2024-07-15 22:48:43.226852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 22:48:43.226884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.802 qpair failed and we were unable to recover it. 
00:24:59.802 [2024-07-15 22:48:43.227039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 22:48:43.227065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.802 qpair failed and we were unable to recover it. 00:24:59.802 [2024-07-15 22:48:43.227223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 22:48:43.227248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.802 qpair failed and we were unable to recover it. 00:24:59.802 [2024-07-15 22:48:43.227391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 22:48:43.227421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.802 qpair failed and we were unable to recover it. 00:24:59.802 [2024-07-15 22:48:43.227589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 22:48:43.227615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.802 qpair failed and we were unable to recover it. 00:24:59.802 [2024-07-15 22:48:43.227760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 22:48:43.227785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.802 qpair failed and we were unable to recover it. 
00:24:59.802 [2024-07-15 22:48:43.227962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 22:48:43.227988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.802 qpair failed and we were unable to recover it. 00:24:59.802 [2024-07-15 22:48:43.228135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 22:48:43.228161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.802 qpair failed and we were unable to recover it. 00:24:59.802 [2024-07-15 22:48:43.228345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 22:48:43.228371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.802 qpair failed and we were unable to recover it. 00:24:59.802 [2024-07-15 22:48:43.228537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 22:48:43.228563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.802 qpair failed and we were unable to recover it. 00:24:59.802 [2024-07-15 22:48:43.228711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 22:48:43.228737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.802 qpair failed and we were unable to recover it. 
00:24:59.802 [2024-07-15 22:48:43.228890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 22:48:43.228916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.802 qpair failed and we were unable to recover it. 00:24:59.802 [2024-07-15 22:48:43.229058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 22:48:43.229085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.802 qpair failed and we were unable to recover it. 00:24:59.802 [2024-07-15 22:48:43.229228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 22:48:43.229254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.802 qpair failed and we were unable to recover it. 00:24:59.802 [2024-07-15 22:48:43.229399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 22:48:43.229424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.802 qpair failed and we were unable to recover it. 00:24:59.802 [2024-07-15 22:48:43.229562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 22:48:43.229587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.802 qpair failed and we were unable to recover it. 
00:24:59.802 [2024-07-15 22:48:43.229759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 22:48:43.229785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.802 qpair failed and we were unable to recover it. 00:24:59.802 [2024-07-15 22:48:43.229940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 22:48:43.229966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.802 qpair failed and we were unable to recover it. 00:24:59.802 [2024-07-15 22:48:43.230100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 22:48:43.230125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.802 qpair failed and we were unable to recover it. 00:24:59.802 [2024-07-15 22:48:43.230307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 22:48:43.230333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.802 qpair failed and we were unable to recover it. 00:24:59.802 [2024-07-15 22:48:43.230481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 22:48:43.230507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.802 qpair failed and we were unable to recover it. 
00:24:59.802 [2024-07-15 22:48:43.230641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 22:48:43.230667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.802 qpair failed and we were unable to recover it. 00:24:59.803 [2024-07-15 22:48:43.230830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.803 [2024-07-15 22:48:43.230855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.803 qpair failed and we were unable to recover it. 00:24:59.803 [2024-07-15 22:48:43.231012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.803 [2024-07-15 22:48:43.231038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.803 qpair failed and we were unable to recover it. 00:24:59.803 [2024-07-15 22:48:43.231182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.803 [2024-07-15 22:48:43.231208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.803 qpair failed and we were unable to recover it. 00:24:59.803 [2024-07-15 22:48:43.231385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.803 [2024-07-15 22:48:43.231410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.803 qpair failed and we were unable to recover it. 
00:24:59.803 [2024-07-15 22:48:43.231582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.803 [2024-07-15 22:48:43.231607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.803 qpair failed and we were unable to recover it. 00:24:59.803 [2024-07-15 22:48:43.231775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.803 [2024-07-15 22:48:43.231800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.803 qpair failed and we were unable to recover it. 00:24:59.803 [2024-07-15 22:48:43.231963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.803 [2024-07-15 22:48:43.231989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.803 qpair failed and we were unable to recover it. 00:24:59.803 [2024-07-15 22:48:43.232149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.803 [2024-07-15 22:48:43.232174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.803 qpair failed and we were unable to recover it. 00:24:59.803 [2024-07-15 22:48:43.232317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.803 [2024-07-15 22:48:43.232342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.803 qpair failed and we were unable to recover it. 
00:24:59.803 [2024-07-15 22:48:43.232532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.803 [2024-07-15 22:48:43.232558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.803 qpair failed and we were unable to recover it. 00:24:59.803 [2024-07-15 22:48:43.232694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.803 [2024-07-15 22:48:43.232720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.803 qpair failed and we were unable to recover it. 00:24:59.803 [2024-07-15 22:48:43.232896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.803 [2024-07-15 22:48:43.232922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.803 qpair failed and we were unable to recover it. 00:24:59.803 [2024-07-15 22:48:43.233067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.803 [2024-07-15 22:48:43.233093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.803 qpair failed and we were unable to recover it. 00:24:59.803 [2024-07-15 22:48:43.233232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.803 [2024-07-15 22:48:43.233257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.803 qpair failed and we were unable to recover it. 
00:24:59.803 [2024-07-15 22:48:43.233396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.803 [2024-07-15 22:48:43.233422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.803 qpair failed and we were unable to recover it. 00:24:59.803 [2024-07-15 22:48:43.233559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.803 [2024-07-15 22:48:43.233585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.803 qpair failed and we were unable to recover it. 00:24:59.803 [2024-07-15 22:48:43.233754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.803 [2024-07-15 22:48:43.233779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.803 qpair failed and we were unable to recover it. 00:24:59.803 [2024-07-15 22:48:43.233926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.803 [2024-07-15 22:48:43.233952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.803 qpair failed and we were unable to recover it. 00:24:59.803 [2024-07-15 22:48:43.234099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.803 [2024-07-15 22:48:43.234124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.803 qpair failed and we were unable to recover it. 
00:24:59.803 [2024-07-15 22:48:43.234296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.803 [2024-07-15 22:48:43.234322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.803 qpair failed and we were unable to recover it. 00:24:59.803 [2024-07-15 22:48:43.234475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.803 [2024-07-15 22:48:43.234500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.803 qpair failed and we were unable to recover it. 00:24:59.803 [2024-07-15 22:48:43.234671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.803 [2024-07-15 22:48:43.234697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.803 qpair failed and we were unable to recover it. 00:24:59.803 [2024-07-15 22:48:43.234833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.803 [2024-07-15 22:48:43.234863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.803 qpair failed and we were unable to recover it. 00:24:59.803 [2024-07-15 22:48:43.235013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.803 [2024-07-15 22:48:43.235038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.803 qpair failed and we were unable to recover it. 
00:24:59.803 [2024-07-15 22:48:43.235196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.803 [2024-07-15 22:48:43.235222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.803 qpair failed and we were unable to recover it. 00:24:59.803 [2024-07-15 22:48:43.235398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.803 [2024-07-15 22:48:43.235423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.803 qpair failed and we were unable to recover it. 00:24:59.803 [2024-07-15 22:48:43.235590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.803 [2024-07-15 22:48:43.235615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.803 qpair failed and we were unable to recover it. 00:24:59.803 [2024-07-15 22:48:43.235769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.803 [2024-07-15 22:48:43.235794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.803 qpair failed and we were unable to recover it. 00:24:59.803 [2024-07-15 22:48:43.235946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.803 [2024-07-15 22:48:43.235973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:24:59.803 qpair failed and we were unable to recover it. 
00:25:00.073 [2024-07-15 22:48:43.256098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-15 22:48:43.256124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-15 22:48:43.256284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-15 22:48:43.256309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-15 22:48:43.256454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-15 22:48:43.256481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-15 22:48:43.256678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-15 22:48:43.256704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-15 22:48:43.256866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-15 22:48:43.256898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 
00:25:00.073 [2024-07-15 22:48:43.257049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-15 22:48:43.257075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-15 22:48:43.257251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-15 22:48:43.257277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-15 22:48:43.257411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-15 22:48:43.257436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-15 22:48:43.257605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-15 22:48:43.257630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-15 22:48:43.257797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-15 22:48:43.257822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 
00:25:00.073 [2024-07-15 22:48:43.257967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-15 22:48:43.257993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-15 22:48:43.258142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-15 22:48:43.258167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-15 22:48:43.258313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-15 22:48:43.258338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-15 22:48:43.258479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-15 22:48:43.258505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-15 22:48:43.258651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-15 22:48:43.258677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 
00:25:00.073 [2024-07-15 22:48:43.258820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-15 22:48:43.258846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-15 22:48:43.258987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-15 22:48:43.259014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-15 22:48:43.259161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-15 22:48:43.259186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-15 22:48:43.259329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-15 22:48:43.259355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-15 22:48:43.259552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-15 22:48:43.259582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 
00:25:00.073 [2024-07-15 22:48:43.259727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-15 22:48:43.259753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-15 22:48:43.259914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-15 22:48:43.259940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-15 22:48:43.260087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-15 22:48:43.260114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-15 22:48:43.260284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-15 22:48:43.260309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-15 22:48:43.260474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-15 22:48:43.260499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 
00:25:00.073 [2024-07-15 22:48:43.260644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-15 22:48:43.260669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-15 22:48:43.260836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.260862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.261025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.261051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.261229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.261254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.261441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.261467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 
00:25:00.074 [2024-07-15 22:48:43.261601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.261627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.261795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.261820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.261982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.262008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.262167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.262192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.262361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.262386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 
00:25:00.074 [2024-07-15 22:48:43.262530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.262556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.262701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.262726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.262901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.262926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.263063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.263088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.263251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.263277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 
00:25:00.074 [2024-07-15 22:48:43.263426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.263451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.263615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.263641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.263783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.263809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.263989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.264015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.264191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.264216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 
00:25:00.074 [2024-07-15 22:48:43.264363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.264389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.264522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.264548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.264723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.264748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.264919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.264946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.265082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.265108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 
00:25:00.074 [2024-07-15 22:48:43.265253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.265279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.265426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.265452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.265591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.265617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.265753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.265778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.265953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.265980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 
00:25:00.074 [2024-07-15 22:48:43.266157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.266183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.266372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.266398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.266555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.266580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.266747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.266772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.266929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.266956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 
00:25:00.074 [2024-07-15 22:48:43.267100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.267126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.267264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.267289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.267440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.267466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.267617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.267642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.267783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.267808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 
00:25:00.074 [2024-07-15 22:48:43.267945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.267971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.268125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.268151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.268324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.268349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.268515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.268540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.268682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.268707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 
00:25:00.074 [2024-07-15 22:48:43.268887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.268913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.269055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.269082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.269270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.269295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.269442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.269468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.269609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.269635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 
00:25:00.074 [2024-07-15 22:48:43.269782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.269807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.269983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.270009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.270161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.270187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.270332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.270357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-15 22:48:43.270506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-15 22:48:43.270532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 
00:25:00.076 [2024-07-15 22:48:43.290721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.290747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.290887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.290913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.291058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.291084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.291219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.291245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.291392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.291418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 
00:25:00.076 [2024-07-15 22:48:43.291589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.291615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.291802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.291832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.291988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.292015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.292150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.292176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.292323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.292349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 
00:25:00.076 [2024-07-15 22:48:43.292510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.292536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.292673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.292699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.292850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.292894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.293055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.293081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.293222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.293248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 
00:25:00.076 [2024-07-15 22:48:43.293412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.293438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.293572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.293598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.293792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.293830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.293994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.294021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.294157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.294182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 
00:25:00.076 [2024-07-15 22:48:43.294351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.294377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.294510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.294536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.294685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.294711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.294885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.294911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.295082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.295108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 
00:25:00.076 [2024-07-15 22:48:43.295257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.295283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.295416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.295441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.295611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.295637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.295813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.295839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.295980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.296006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 
00:25:00.076 [2024-07-15 22:48:43.296176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.296202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.296370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.296396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.296540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.296566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.296712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.296743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.296898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.296924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 
00:25:00.076 [2024-07-15 22:48:43.297120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.297146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.297300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.297326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.297476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.297502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.297675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.297701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.297853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.297884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 
00:25:00.076 [2024-07-15 22:48:43.298025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.298050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.298188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.298213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.298384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.298409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.298560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.298586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.298732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.298758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 
00:25:00.076 [2024-07-15 22:48:43.298927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.298953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.299089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.299116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.299295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.299322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.299461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.299487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.299657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.299683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 
00:25:00.076 [2024-07-15 22:48:43.299839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.299865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.300011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.300037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.300208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.300234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.300420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.300445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.300604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.300630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 
00:25:00.076 [2024-07-15 22:48:43.300808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.300835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.300975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.301001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.301172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.301198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.301339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.301365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.301531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.301557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 
00:25:00.076 [2024-07-15 22:48:43.301694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.301720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.301885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.301911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.302079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.302105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.076 [2024-07-15 22:48:43.302269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.076 [2024-07-15 22:48:43.302294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.076 qpair failed and we were unable to recover it. 00:25:00.077 [2024-07-15 22:48:43.302436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.077 [2024-07-15 22:48:43.302462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.077 qpair failed and we were unable to recover it. 
00:25:00.077 [2024-07-15 22:48:43.302602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.077 [2024-07-15 22:48:43.302628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.077 qpair failed and we were unable to recover it. 00:25:00.077 [2024-07-15 22:48:43.302769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.077 [2024-07-15 22:48:43.302794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.077 qpair failed and we were unable to recover it. 00:25:00.077 [2024-07-15 22:48:43.302944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.077 [2024-07-15 22:48:43.302970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.077 qpair failed and we were unable to recover it. 00:25:00.077 [2024-07-15 22:48:43.303111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.077 [2024-07-15 22:48:43.303137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.077 qpair failed and we were unable to recover it. 00:25:00.077 [2024-07-15 22:48:43.303289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.077 [2024-07-15 22:48:43.303315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.077 qpair failed and we were unable to recover it. 
00:25:00.077 [2024-07-15 22:48:43.303484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.077 [2024-07-15 22:48:43.303510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.077 qpair failed and we were unable to recover it. 00:25:00.077 [2024-07-15 22:48:43.303689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.077 [2024-07-15 22:48:43.303715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.077 qpair failed and we were unable to recover it. 00:25:00.077 [2024-07-15 22:48:43.303869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.077 [2024-07-15 22:48:43.303899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.077 qpair failed and we were unable to recover it. 00:25:00.077 [2024-07-15 22:48:43.304044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.077 [2024-07-15 22:48:43.304070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.077 qpair failed and we were unable to recover it. 00:25:00.077 [2024-07-15 22:48:43.304229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.077 [2024-07-15 22:48:43.304259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.077 qpair failed and we were unable to recover it. 
00:25:00.077 [2024-07-15 22:48:43.304403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.077 [2024-07-15 22:48:43.304429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.077 qpair failed and we were unable to recover it.
[log condensed: the same posix_sock_create connect() failure (errno = 111, ECONNREFUSED) and nvme_tcp_qpair_connect_sock error for tqpair=0xd3f200 (addr=10.0.0.2, port=4420), each followed by "qpair failed and we were unable to recover it.", repeated continuously with successive timestamps from 22:48:43.304403 through 22:48:43.325178]
00:25:00.078 [2024-07-15 22:48:43.325356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-15 22:48:43.325382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-15 22:48:43.325534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-15 22:48:43.325560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-15 22:48:43.325693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-15 22:48:43.325718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-15 22:48:43.325858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-15 22:48:43.325889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-15 22:48:43.326040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-15 22:48:43.326066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 
00:25:00.078 [2024-07-15 22:48:43.326240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-15 22:48:43.326265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-15 22:48:43.326425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-15 22:48:43.326451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-15 22:48:43.326593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-15 22:48:43.326619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-15 22:48:43.326765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-15 22:48:43.326793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-15 22:48:43.326943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-15 22:48:43.326969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 
00:25:00.078 [2024-07-15 22:48:43.327137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-15 22:48:43.327162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-15 22:48:43.327300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-15 22:48:43.327326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-15 22:48:43.327496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-15 22:48:43.327522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-15 22:48:43.327674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-15 22:48:43.327699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-15 22:48:43.327833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-15 22:48:43.327859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 
00:25:00.078 [2024-07-15 22:48:43.328030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-15 22:48:43.328056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-15 22:48:43.328203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-15 22:48:43.328230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-15 22:48:43.328397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-15 22:48:43.328423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-15 22:48:43.328580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-15 22:48:43.328606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-15 22:48:43.328764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-15 22:48:43.328790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 
00:25:00.078 [2024-07-15 22:48:43.328932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-15 22:48:43.328971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-15 22:48:43.329122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-15 22:48:43.329148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-15 22:48:43.329291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-15 22:48:43.329316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-15 22:48:43.329462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-15 22:48:43.329488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-15 22:48:43.329661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-15 22:48:43.329686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 
00:25:00.078 [2024-07-15 22:48:43.329823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-15 22:48:43.329849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-15 22:48:43.329999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-15 22:48:43.330025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.330167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.330192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.330325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.330351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.330530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.330555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 
00:25:00.079 [2024-07-15 22:48:43.330711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.330739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.330886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.330913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.331073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.331099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.331277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.331303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.331479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.331505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 
00:25:00.079 [2024-07-15 22:48:43.331647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.331672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.331813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.331839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.331985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.332012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.332167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.332193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.332362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.332388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 
00:25:00.079 [2024-07-15 22:48:43.332542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.332568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.332708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.332733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.332893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.332919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.333070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.333096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.333235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.333261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 
00:25:00.079 [2024-07-15 22:48:43.333408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.333434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.333573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.333599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.333749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.333774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.333939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.333965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.334100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.334126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 
00:25:00.079 [2024-07-15 22:48:43.334266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.334291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.334447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.334473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.334675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.334700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.334834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.334860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.335047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.335073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 
00:25:00.079 [2024-07-15 22:48:43.335229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.335255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.335426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.335452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.335605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.335631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.335766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.335791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.335961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.335987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 
00:25:00.079 [2024-07-15 22:48:43.336123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.336149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.336302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.336328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.336498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.336523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.336659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.336684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.336829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.336854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 
00:25:00.079 [2024-07-15 22:48:43.337010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.337036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.337176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.337201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.337367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.337392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.337587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.337613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.337810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.337835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 
00:25:00.079 [2024-07-15 22:48:43.337988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.338015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.338173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.338198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.338333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.338359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.338558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.338584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-15 22:48:43.338743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-15 22:48:43.338769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 
00:25:00.079 [2024-07-15 22:48:43.338941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.079 [2024-07-15 22:48:43.338967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.079 qpair failed and we were unable to recover it.
00:25:00.079-00:25:00.081 [the identical three-line sequence -- connect() failed, errno = 111 / sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." -- repeats continuously, differing only in timestamps, through 2024-07-15 22:48:43.359988]
00:25:00.081 [2024-07-15 22:48:43.360146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.360172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.360345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.360371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.360534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.360559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.360712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.360738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.360905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.360931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 
00:25:00.081 [2024-07-15 22:48:43.361093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.361119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.361268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.361294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.361456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.361488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.361633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.361658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.361809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.361834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 
00:25:00.081 [2024-07-15 22:48:43.361990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.362017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.362152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.362177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.362333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.362358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.362526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.362551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.362693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.362718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 
00:25:00.081 [2024-07-15 22:48:43.362862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.362892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.363088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.363113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.363274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.363299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.363461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.363486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.363629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.363656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 
00:25:00.081 [2024-07-15 22:48:43.363802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.363828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.363979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.364006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.364154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.364179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.364350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.364376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.364530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.364556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 
00:25:00.081 [2024-07-15 22:48:43.364697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.364723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.364870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.364901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.365073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.365099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.365242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.365268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.365437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.365463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 
00:25:00.081 [2024-07-15 22:48:43.365655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.365681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.365816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.365842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.365996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.366022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.366221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.366246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.366381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.366411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 
00:25:00.081 [2024-07-15 22:48:43.366576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.366602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.366771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.366796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.366958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.366984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.367160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.367185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.367327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.367352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 
00:25:00.081 [2024-07-15 22:48:43.367491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.367516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.367652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.367677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.367818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.367843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.367994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.368020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.368187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.368212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 
00:25:00.081 [2024-07-15 22:48:43.368383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.368409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.368563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.368589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.368786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.368811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.368964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.368991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.369139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.369165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 
00:25:00.081 [2024-07-15 22:48:43.369327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.369353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.369495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.369522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.369663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.369689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.369835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.369860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.370014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.370039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 
00:25:00.081 [2024-07-15 22:48:43.370207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.370233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.370371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.370397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.370555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.370580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.370712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.370738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.081 [2024-07-15 22:48:43.370915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.370941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 
00:25:00.081 [2024-07-15 22:48:43.371085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.081 [2024-07-15 22:48:43.371110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.081 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.371246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.371272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.371447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.371472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.371625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.371650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.371793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.371818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 
00:25:00.082 [2024-07-15 22:48:43.371995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.372021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.372186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.372212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.372380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.372406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.372558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.372585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.372737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.372763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 
00:25:00.082 [2024-07-15 22:48:43.372946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.372972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.373126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.373152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.373295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.373321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.373468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.373494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.373639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.373665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 
00:25:00.082 [2024-07-15 22:48:43.373800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.373829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.374015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.374041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.374184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.374211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.374366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.374392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.374537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.374564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 
00:25:00.082 [2024-07-15 22:48:43.374710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.374735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.374887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.374913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.375053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.375089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.375241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.375267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.375427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.375452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 
00:25:00.082 [2024-07-15 22:48:43.375599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.375625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.375798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.375823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.375964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.375991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.376161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.376187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.376329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.376354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 
00:25:00.082 [2024-07-15 22:48:43.376485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.376510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.376682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.376710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.376852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.376893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.377070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.377097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.377256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.377282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 
00:25:00.082 [2024-07-15 22:48:43.377452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.377478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.377618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.377644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.377817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.377843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.377999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.378025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.378171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.378196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 
00:25:00.082 [2024-07-15 22:48:43.378353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.378380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.378531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.378557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.378735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.378766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.378926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.378952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.379111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.379137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 
00:25:00.082 [2024-07-15 22:48:43.379302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.379327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.379502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.379527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.379695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.379721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.379856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.379887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.380048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.380074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 
00:25:00.082 [2024-07-15 22:48:43.380235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.380261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.380419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.380445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.380603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.380629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.380828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.380853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.380999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.381025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 
00:25:00.082 [2024-07-15 22:48:43.381195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.381220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.381367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.381392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.381545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.381570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.381725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.381750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.381923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.381950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 
00:25:00.082 [2024-07-15 22:48:43.382157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.382183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.382340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.382365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.382506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.382531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.382677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.382704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.382868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.382899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 
00:25:00.082 [2024-07-15 22:48:43.383044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.383070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.383214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.383239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.383404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.383430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.383582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.383607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.383743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.383769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 
00:25:00.082 [2024-07-15 22:48:43.383927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.383954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.384131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.384156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.384305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.384330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.384480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.384505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.384680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.384705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 
00:25:00.082 [2024-07-15 22:48:43.384843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.384868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.385026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.385051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.385187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.385212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.385348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.385373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 00:25:00.082 [2024-07-15 22:48:43.385507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.082 [2024-07-15 22:48:43.385533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.082 qpair failed and we were unable to recover it. 
00:25:00.083 [2024-07-15 22:48:43.385707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.385733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-15 22:48:43.385898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.385924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-15 22:48:43.386085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.386110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-15 22:48:43.386249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.386279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-15 22:48:43.386426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.386452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 
00:25:00.083 [2024-07-15 22:48:43.386592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.386617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-15 22:48:43.386774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.386800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-15 22:48:43.386945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.386971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-15 22:48:43.387130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.387155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-15 22:48:43.387340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.387365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 
00:25:00.083 [2024-07-15 22:48:43.387521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.387546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-15 22:48:43.387692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.387717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-15 22:48:43.387859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.387888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-15 22:48:43.388064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.388090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-15 22:48:43.388254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.388279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 
00:25:00.083 [2024-07-15 22:48:43.388446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.388472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-15 22:48:43.388617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.388643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-15 22:48:43.388792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.388818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-15 22:48:43.388990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.389016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-15 22:48:43.389159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.389184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 
00:25:00.083 [2024-07-15 22:48:43.389322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.389348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-15 22:48:43.389500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.389525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-15 22:48:43.389663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.389688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-15 22:48:43.389859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.389901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-15 22:48:43.390047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.390072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 
00:25:00.083 [2024-07-15 22:48:43.390244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.390269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-15 22:48:43.390410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.390435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-15 22:48:43.390598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.390623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-15 22:48:43.390777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.390802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-15 22:48:43.390971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.390997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 
00:25:00.083 [2024-07-15 22:48:43.391139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.391165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-15 22:48:43.391341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.391366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-15 22:48:43.391507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.391532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-15 22:48:43.391671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.391696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-15 22:48:43.391830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-15 22:48:43.391856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 
00:25:00.084 22:48:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:25:00.084 22:48:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # return 0
00:25:00.084 22:48:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:25:00.084 22:48:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:25:00.084 22:48:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:00.084 [2024-07-15 22:48:43.411521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.084 [2024-07-15 22:48:43.411547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.084 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.411697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.411722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.411898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.411925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.412083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.412108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.412276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.412302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 
00:25:00.085 [2024-07-15 22:48:43.412487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.412513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.412651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.412677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.412837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.412863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.413026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.413051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.413196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.413222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 
00:25:00.085 [2024-07-15 22:48:43.413406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.413432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.413619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.413645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.413798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.413824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.413990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.414016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.414157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.414183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 
00:25:00.085 [2024-07-15 22:48:43.414335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.414361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.414525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.414551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.414701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.414728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.414902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.414928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.415103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.415129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 
00:25:00.085 [2024-07-15 22:48:43.415304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.415329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.415465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.415501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.415646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.415672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.415840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.415865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.416031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.416058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 
00:25:00.085 [2024-07-15 22:48:43.416213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.416239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.416390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.416416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.416560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.416594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.416730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.416755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.416919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.416946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 
00:25:00.085 [2024-07-15 22:48:43.417095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.417121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.417274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.417300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.417469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.417494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.417632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.417657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.417803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.417828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 
00:25:00.085 [2024-07-15 22:48:43.417998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.418028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.418199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.418225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.418392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.418418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.418582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.418608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.418754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.418780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 
00:25:00.085 [2024-07-15 22:48:43.418916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.418942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.419087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.419113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.419260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.419286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.419472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.419498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.419661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.419686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 
00:25:00.085 [2024-07-15 22:48:43.419857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.419897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.420037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.420062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.420203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.420228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.420367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.420392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.420572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.420598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 
00:25:00.085 [2024-07-15 22:48:43.420740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.420765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.420908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.420934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.421084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.421109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.421281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.421306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.421445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.421471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 
00:25:00.085 [2024-07-15 22:48:43.421612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.421637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.421771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.421802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.421960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.421986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.422126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.422151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.422295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.422320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 
00:25:00.085 22:48:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:00.085 [2024-07-15 22:48:43.422490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.422515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 22:48:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:00.085 22:48:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:00.085 [2024-07-15 22:48:43.422665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.422694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 22:48:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:00.085 [2024-07-15 22:48:43.422841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.422874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.423025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.423051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 
00:25:00.085 [2024-07-15 22:48:43.423205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.423231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.423406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.423432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.423584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.423610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.423744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.423770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.423957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.423984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 
00:25:00.085 [2024-07-15 22:48:43.424127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.424153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.424334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.424359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.424536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.424562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.424726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.424752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.424889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.424915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 
00:25:00.085 [2024-07-15 22:48:43.425072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.425102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-15 22:48:43.425263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-15 22:48:43.425290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.425460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.425485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.425624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.425650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.425788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.425813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 
00:25:00.086 [2024-07-15 22:48:43.426024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.426049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.426202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.426228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.426382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.426407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.426685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.426711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.426885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.426911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 
00:25:00.086 [2024-07-15 22:48:43.427048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.427072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.427238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.427263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.427435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.427460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.427662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.427687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.427832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.427857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 
00:25:00.086 [2024-07-15 22:48:43.428002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.428027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.428178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.428204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.428366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.428391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.428543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.428568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.428736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.428761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 
00:25:00.086 [2024-07-15 22:48:43.428909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.428935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.429077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.429102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.429276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.429301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.429450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.429475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.429641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.429667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 
00:25:00.086 [2024-07-15 22:48:43.429836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.429861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.430132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.430158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.430312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.430341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.430581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.430607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.430776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.430802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 
00:25:00.086 [2024-07-15 22:48:43.430983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.431009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.431159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.431184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.431347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.431373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.431549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.431575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.431745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.431771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 
00:25:00.086 [2024-07-15 22:48:43.431914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.431940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.432077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.432102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.432276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.432301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.432439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.432464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.432605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.432630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 
00:25:00.086 [2024-07-15 22:48:43.432779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.432804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.433012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.433038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.433184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.433210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.433352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.433377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.433520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.433545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 
00:25:00.086 [2024-07-15 22:48:43.433693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.433718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.433859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.433888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.434063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.434089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.434260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.434286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.434431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.434456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 
00:25:00.086 [2024-07-15 22:48:43.434623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.434649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.434818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.434844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.435012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.435038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.435188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.435214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.435363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.435388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 
00:25:00.086 [2024-07-15 22:48:43.435532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.435558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.435707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.435734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.435908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.435934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.436104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.436129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.436302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.436328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 
00:25:00.086 [2024-07-15 22:48:43.436501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.436526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.436698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.436723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.436881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.436908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.437081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.437106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.437265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.437290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 
00:25:00.086 [2024-07-15 22:48:43.437527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.437552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.437727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.437753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.437898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.437924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.438080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.438114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.438303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.438329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 
00:25:00.086 [2024-07-15 22:48:43.438499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.438524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.438661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.438686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.438865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.438926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.439093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.439119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-15 22:48:43.439306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.439331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 
00:25:00.086 [2024-07-15 22:48:43.439488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-15 22:48:43.439513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-15 22:48:43.439689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-15 22:48:43.439715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-15 22:48:43.439882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-15 22:48:43.439908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-15 22:48:43.440061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-15 22:48:43.440086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-15 22:48:43.440234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-15 22:48:43.440259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 
00:25:00.087 [2024-07-15 22:48:43.440429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-15 22:48:43.440455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-15 22:48:43.440602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-15 22:48:43.440627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-15 22:48:43.440793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-15 22:48:43.440818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-15 22:48:43.441000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-15 22:48:43.441026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-15 22:48:43.441205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-15 22:48:43.441230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 
00:25:00.087 [2024-07-15 22:48:43.441372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-15 22:48:43.441397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-15 22:48:43.441571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-15 22:48:43.441596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-15 22:48:43.441796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-15 22:48:43.441821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-15 22:48:43.442004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-15 22:48:43.442031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-15 22:48:43.442177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-15 22:48:43.442203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 
00:25:00.087 [2024-07-15 22:48:43.442351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-15 22:48:43.442378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-15 22:48:43.442526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-15 22:48:43.442551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-15 22:48:43.442697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-15 22:48:43.442724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-15 22:48:43.442895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-15 22:48:43.442922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-15 22:48:43.443082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-15 22:48:43.443107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 
00:25:00.087 [2024-07-15 22:48:43.443256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-15 22:48:43.443286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-15 22:48:43.443455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-15 22:48:43.443481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-15 22:48:43.443630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-15 22:48:43.443657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-15 22:48:43.443846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-15 22:48:43.443873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-15 22:48:43.444021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-15 22:48:43.444047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 
00:25:00.087 [2024-07-15 22:48:43.444231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-15 22:48:43.444256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-15 22:48:43.444395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-15 22:48:43.444422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-15 22:48:43.444563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-15 22:48:43.444589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-15 22:48:43.444758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-15 22:48:43.444784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 Malloc0 00:25:00.087 [2024-07-15 22:48:43.444976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-15 22:48:43.445002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 
00:25:00.087 [2024-07-15 22:48:43.445149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.445175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 22:48:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 22:48:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:25:00.087 [2024-07-15 22:48:43.445335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.445362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 22:48:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:00.087 [2024-07-15 22:48:43.445498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.445524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 22:48:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:00.087 [2024-07-15 22:48:43.445671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.445696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.445849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.445874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.446066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.446092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.446267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.446292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.446434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.446459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.446631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.446657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.446798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.446824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.446974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.447001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.447146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.447172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.447318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.447343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.447522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.447547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.447690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.447715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.447887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.447913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.448060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.448086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.448259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.448284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.448342] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:00.087 [2024-07-15 22:48:43.448451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.448476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.448624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.448649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.448837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.448873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.449047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.449072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.449217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.449242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.449408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.449433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.449580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.449605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.449755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.449781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.449937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.449964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.450101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.450127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.450296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.450322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.450478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.450506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.450645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.450670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.450840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.450868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.451027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.451052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.451206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.451232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.451375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.451401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.451538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.451564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.451700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.451725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.451898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.451924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.452073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.452098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.452272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.452298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.452437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.452462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.452632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.452657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.452816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.452842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.452998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.453024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.453169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.453195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.453357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.453383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.453556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.453581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.453719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.453744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.087 qpair failed and we were unable to recover it.
00:25:00.087 [2024-07-15 22:48:43.453891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.087 [2024-07-15 22:48:43.453917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.454063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.454088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.454269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.454295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.454464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.454489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.454658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.454684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.454828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.454853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.455006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.455033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.455193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.455219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.455388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.455418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.455608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.455633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.455783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.455810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.456002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.456029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.456172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.456197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.456335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.456361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.456494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.456520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 22:48:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:00.088 [2024-07-15 22:48:43.456677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.456702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 22:48:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 22:48:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:00.088 [2024-07-15 22:48:43.456844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.456870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 22:48:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:00.088 [2024-07-15 22:48:43.457023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.457049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.457197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.457222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.457373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.457399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.457571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.457601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.457775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.457801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.457948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.457974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.458129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.458156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.458303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.458329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.458472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.458498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.458662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.458688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.458831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.458857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.459008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.459034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.459188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.459214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.459374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.459400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.459538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.459564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.459709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.459735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.459887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.459913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.460082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.460108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.460288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.460313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.460457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.460482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.460628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.460653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.460840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.460866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.461029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.461054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.461195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.461220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.461418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.461443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.461593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.461618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.461787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.461813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.461971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.461997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.462137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.462163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.462307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.462332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.462498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.462523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.462667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.462693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.462846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.462872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.463049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.463075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.463274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.463300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 A controller has encountered a failure and is being reset.
00:25:00.088 [2024-07-15 22:48:43.463476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.463515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd658000b90 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.463700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.463739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.463906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.463934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.464109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.464136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.464337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.464364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.464517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.464543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 22:48:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:00.088 [2024-07-15 22:48:43.464716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.464743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:25:00.088 22:48:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 22:48:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:00.088 [2024-07-15 22:48:43.464910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.464942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 22:48:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:00.088 [2024-07-15 22:48:43.465091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.465117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.465288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.465315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.465492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.465518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.465693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.465719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.465890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.465917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.466074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.466100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.466260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.466286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.466486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.466511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.466656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.466683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.088 [2024-07-15 22:48:43.466845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.088 [2024-07-15 22:48:43.466871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:25:00.088 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.467073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.467100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.467275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.467301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.467475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.467505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.467717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.467743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.467894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.467921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.468097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.468124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.468264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.468289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.468486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.468511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.468688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.468713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd660000b90 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.468897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.468925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.469073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.469099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.469295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.469320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.469517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.469543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.469682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.469708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.469872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.469905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.470060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.470085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.470231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.470257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.470414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.470440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.470622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.470647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.470795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.470821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.470972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.470998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.471175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.471201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.471373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.471398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.471568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.471593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.471785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.471811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.471957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.471983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.472126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.472151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.472372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.472397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.472535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.472561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 22:48:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:00.089 [2024-07-15 22:48:43.472697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.472723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 22:48:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:00.089 [2024-07-15 22:48:43.472867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 22:48:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:00.089 [2024-07-15 22:48:43.472925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 22:48:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:00.089 [2024-07-15 22:48:43.473071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.473096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.473243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.473270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.473411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.473437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.473599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.473625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.473776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.473802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.473946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.473972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.474141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.474167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.474341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.474367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.474513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.474539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.474702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.474727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.474906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.474936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.475082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.475109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.475255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.475280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.475479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.475505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.475653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.475680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.475826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.475852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.476005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.476031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.476176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.476201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.476371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.089 [2024-07-15 22:48:43.476397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3f200 with addr=10.0.0.2, port=4420
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.476568] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:00.089 [2024-07-15 22:48:43.479122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.089 [2024-07-15 22:48:43.479294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.089 [2024-07-15 22:48:43.479321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.089 [2024-07-15 22:48:43.479337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.089 [2024-07-15 22:48:43.479350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200
00:25:00.089 [2024-07-15 22:48:43.479383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 22:48:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:00.089 22:48:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:25:00.089 22:48:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:00.089 22:48:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:00.089 22:48:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:00.089 22:48:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1364418
00:25:00.089 [2024-07-15 22:48:43.488960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.089 [2024-07-15 22:48:43.489108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.089 [2024-07-15 22:48:43.489134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.089 [2024-07-15 22:48:43.489149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.089 [2024-07-15 22:48:43.489162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200
00:25:00.089 [2024-07-15 22:48:43.489190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.498973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.089 [2024-07-15 22:48:43.499118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.089 [2024-07-15 22:48:43.499143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.089 [2024-07-15 22:48:43.499158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.089 [2024-07-15 22:48:43.499171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200
00:25:00.089 [2024-07-15 22:48:43.499199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.508991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.089 [2024-07-15 22:48:43.509142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.089 [2024-07-15 22:48:43.509167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.089 [2024-07-15 22:48:43.509181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.089 [2024-07-15 22:48:43.509194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200
00:25:00.089 [2024-07-15 22:48:43.509223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:00.089 qpair failed and we were unable to recover it.
00:25:00.089 [2024-07-15 22:48:43.518960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.089 [2024-07-15 22:48:43.519107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.089 [2024-07-15 22:48:43.519133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.089 [2024-07-15 22:48:43.519148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.089 [2024-07-15 22:48:43.519161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.089 [2024-07-15 22:48:43.519188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.089 qpair failed and we were unable to recover it. 
00:25:00.089 [2024-07-15 22:48:43.528972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.089 [2024-07-15 22:48:43.529121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.089 [2024-07-15 22:48:43.529147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.089 [2024-07-15 22:48:43.529162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.089 [2024-07-15 22:48:43.529175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.089 [2024-07-15 22:48:43.529203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.089 qpair failed and we were unable to recover it. 
00:25:00.089 [2024-07-15 22:48:43.539028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.089 [2024-07-15 22:48:43.539166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.089 [2024-07-15 22:48:43.539191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.089 [2024-07-15 22:48:43.539206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.089 [2024-07-15 22:48:43.539218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.089 [2024-07-15 22:48:43.539246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.089 qpair failed and we were unable to recover it. 
00:25:00.089 [2024-07-15 22:48:43.549005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.089 [2024-07-15 22:48:43.549156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.089 [2024-07-15 22:48:43.549182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.089 [2024-07-15 22:48:43.549197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.089 [2024-07-15 22:48:43.549210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.089 [2024-07-15 22:48:43.549238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.089 qpair failed and we were unable to recover it. 
00:25:00.089 [2024-07-15 22:48:43.559079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.089 [2024-07-15 22:48:43.559232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.089 [2024-07-15 22:48:43.559257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.089 [2024-07-15 22:48:43.559272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.089 [2024-07-15 22:48:43.559284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.090 [2024-07-15 22:48:43.559312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.090 qpair failed and we were unable to recover it. 
00:25:00.350 [2024-07-15 22:48:43.569121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.350 [2024-07-15 22:48:43.569278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.350 [2024-07-15 22:48:43.569305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.350 [2024-07-15 22:48:43.569319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.350 [2024-07-15 22:48:43.569338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.350 [2024-07-15 22:48:43.569367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.350 qpair failed and we were unable to recover it. 
00:25:00.350 [2024-07-15 22:48:43.579127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.350 [2024-07-15 22:48:43.579286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.350 [2024-07-15 22:48:43.579311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.350 [2024-07-15 22:48:43.579326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.350 [2024-07-15 22:48:43.579339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.350 [2024-07-15 22:48:43.579367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.350 qpair failed and we were unable to recover it. 
00:25:00.350 [2024-07-15 22:48:43.589175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.350 [2024-07-15 22:48:43.589330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.350 [2024-07-15 22:48:43.589356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.350 [2024-07-15 22:48:43.589370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.350 [2024-07-15 22:48:43.589383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.350 [2024-07-15 22:48:43.589411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.350 qpair failed and we were unable to recover it. 
00:25:00.350 [2024-07-15 22:48:43.599190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.350 [2024-07-15 22:48:43.599339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.350 [2024-07-15 22:48:43.599364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.350 [2024-07-15 22:48:43.599379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.350 [2024-07-15 22:48:43.599392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.350 [2024-07-15 22:48:43.599420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.350 qpair failed and we were unable to recover it. 
00:25:00.350 [2024-07-15 22:48:43.609223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.350 [2024-07-15 22:48:43.609367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.350 [2024-07-15 22:48:43.609392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.350 [2024-07-15 22:48:43.609406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.350 [2024-07-15 22:48:43.609419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.350 [2024-07-15 22:48:43.609447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.350 qpair failed and we were unable to recover it. 
00:25:00.350 [2024-07-15 22:48:43.619270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.350 [2024-07-15 22:48:43.619418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.350 [2024-07-15 22:48:43.619444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.350 [2024-07-15 22:48:43.619458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.350 [2024-07-15 22:48:43.619472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.350 [2024-07-15 22:48:43.619500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.350 qpair failed and we were unable to recover it. 
00:25:00.350 [2024-07-15 22:48:43.629242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.350 [2024-07-15 22:48:43.629401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.350 [2024-07-15 22:48:43.629426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.350 [2024-07-15 22:48:43.629440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.350 [2024-07-15 22:48:43.629454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.350 [2024-07-15 22:48:43.629481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.350 qpair failed and we were unable to recover it. 
00:25:00.350 [2024-07-15 22:48:43.639296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.350 [2024-07-15 22:48:43.639447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.350 [2024-07-15 22:48:43.639472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.350 [2024-07-15 22:48:43.639486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.350 [2024-07-15 22:48:43.639499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.350 [2024-07-15 22:48:43.639530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.350 qpair failed and we were unable to recover it. 
00:25:00.350 [2024-07-15 22:48:43.649308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.350 [2024-07-15 22:48:43.649456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.350 [2024-07-15 22:48:43.649482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.350 [2024-07-15 22:48:43.649497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.350 [2024-07-15 22:48:43.649510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.350 [2024-07-15 22:48:43.649537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.350 qpair failed and we were unable to recover it. 
00:25:00.350 [2024-07-15 22:48:43.659339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.350 [2024-07-15 22:48:43.659492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.350 [2024-07-15 22:48:43.659517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.350 [2024-07-15 22:48:43.659541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.351 [2024-07-15 22:48:43.659556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.351 [2024-07-15 22:48:43.659584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.351 qpair failed and we were unable to recover it. 
00:25:00.351 [2024-07-15 22:48:43.669353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.351 [2024-07-15 22:48:43.669503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.351 [2024-07-15 22:48:43.669528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.351 [2024-07-15 22:48:43.669543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.351 [2024-07-15 22:48:43.669556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.351 [2024-07-15 22:48:43.669584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.351 qpair failed and we were unable to recover it. 
00:25:00.351 [2024-07-15 22:48:43.679423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.351 [2024-07-15 22:48:43.679567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.351 [2024-07-15 22:48:43.679593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.351 [2024-07-15 22:48:43.679608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.351 [2024-07-15 22:48:43.679621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.351 [2024-07-15 22:48:43.679649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.351 qpair failed and we were unable to recover it. 
00:25:00.351 [2024-07-15 22:48:43.689466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.351 [2024-07-15 22:48:43.689671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.351 [2024-07-15 22:48:43.689697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.351 [2024-07-15 22:48:43.689711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.351 [2024-07-15 22:48:43.689724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.351 [2024-07-15 22:48:43.689752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.351 qpair failed and we were unable to recover it. 
00:25:00.351 [2024-07-15 22:48:43.699980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.351 [2024-07-15 22:48:43.700143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.351 [2024-07-15 22:48:43.700169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.351 [2024-07-15 22:48:43.700183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.351 [2024-07-15 22:48:43.700197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.351 [2024-07-15 22:48:43.700225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.351 qpair failed and we were unable to recover it. 
00:25:00.351 [2024-07-15 22:48:43.709547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.351 [2024-07-15 22:48:43.709717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.351 [2024-07-15 22:48:43.709742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.351 [2024-07-15 22:48:43.709757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.351 [2024-07-15 22:48:43.709770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.351 [2024-07-15 22:48:43.709797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.351 qpair failed and we were unable to recover it. 
00:25:00.351 [2024-07-15 22:48:43.719576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.351 [2024-07-15 22:48:43.719731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.351 [2024-07-15 22:48:43.719756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.351 [2024-07-15 22:48:43.719770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.351 [2024-07-15 22:48:43.719784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.351 [2024-07-15 22:48:43.719811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.351 qpair failed and we were unable to recover it. 
00:25:00.351 [2024-07-15 22:48:43.729621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.351 [2024-07-15 22:48:43.729782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.351 [2024-07-15 22:48:43.729807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.351 [2024-07-15 22:48:43.729821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.351 [2024-07-15 22:48:43.729835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.351 [2024-07-15 22:48:43.729862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.351 qpair failed and we were unable to recover it. 
00:25:00.351 [2024-07-15 22:48:43.739597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.351 [2024-07-15 22:48:43.739806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.351 [2024-07-15 22:48:43.739831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.351 [2024-07-15 22:48:43.739846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.351 [2024-07-15 22:48:43.739859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.351 [2024-07-15 22:48:43.739893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.351 qpair failed and we were unable to recover it. 
00:25:00.351 [2024-07-15 22:48:43.749578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.351 [2024-07-15 22:48:43.749726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.351 [2024-07-15 22:48:43.749751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.351 [2024-07-15 22:48:43.749771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.351 [2024-07-15 22:48:43.749785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.351 [2024-07-15 22:48:43.749815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.351 qpair failed and we were unable to recover it. 
00:25:00.351 [2024-07-15 22:48:43.759647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.351 [2024-07-15 22:48:43.759834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.351 [2024-07-15 22:48:43.759860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.351 [2024-07-15 22:48:43.759875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.351 [2024-07-15 22:48:43.759896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.351 [2024-07-15 22:48:43.759924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.351 qpair failed and we were unable to recover it. 
00:25:00.351 [2024-07-15 22:48:43.769644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.351 [2024-07-15 22:48:43.769790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.351 [2024-07-15 22:48:43.769815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.351 [2024-07-15 22:48:43.769830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.352 [2024-07-15 22:48:43.769843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.352 [2024-07-15 22:48:43.769871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.352 qpair failed and we were unable to recover it. 
00:25:00.352 [2024-07-15 22:48:43.779718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.352 [2024-07-15 22:48:43.779905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.352 [2024-07-15 22:48:43.779932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.352 [2024-07-15 22:48:43.779946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.352 [2024-07-15 22:48:43.779960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.352 [2024-07-15 22:48:43.779988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.352 qpair failed and we were unable to recover it. 
00:25:00.352 [2024-07-15 22:48:43.789722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.352 [2024-07-15 22:48:43.789896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.352 [2024-07-15 22:48:43.789921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.352 [2024-07-15 22:48:43.789935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.352 [2024-07-15 22:48:43.789948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.352 [2024-07-15 22:48:43.789976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.352 qpair failed and we were unable to recover it. 
00:25:00.352 [2024-07-15 22:48:43.799695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.352 [2024-07-15 22:48:43.799842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.352 [2024-07-15 22:48:43.799867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.352 [2024-07-15 22:48:43.799887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.352 [2024-07-15 22:48:43.799902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.352 [2024-07-15 22:48:43.799930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.352 qpair failed and we were unable to recover it. 
00:25:00.352 [2024-07-15 22:48:43.809739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.352 [2024-07-15 22:48:43.809906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.352 [2024-07-15 22:48:43.809932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.352 [2024-07-15 22:48:43.809946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.352 [2024-07-15 22:48:43.809959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.352 [2024-07-15 22:48:43.809987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.352 qpair failed and we were unable to recover it. 
00:25:00.352 [2024-07-15 22:48:43.819847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.352 [2024-07-15 22:48:43.819998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.352 [2024-07-15 22:48:43.820023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.352 [2024-07-15 22:48:43.820038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.352 [2024-07-15 22:48:43.820051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.352 [2024-07-15 22:48:43.820079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.352 qpair failed and we were unable to recover it. 
00:25:00.352 [2024-07-15 22:48:43.829794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.352 [2024-07-15 22:48:43.829965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.352 [2024-07-15 22:48:43.829992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.352 [2024-07-15 22:48:43.830006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.352 [2024-07-15 22:48:43.830019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.352 [2024-07-15 22:48:43.830047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.352 qpair failed and we were unable to recover it. 
00:25:00.352 [2024-07-15 22:48:43.839846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.352 [2024-07-15 22:48:43.840008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.352 [2024-07-15 22:48:43.840034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.352 [2024-07-15 22:48:43.840055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.352 [2024-07-15 22:48:43.840069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.352 [2024-07-15 22:48:43.840097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.352 qpair failed and we were unable to recover it. 
00:25:00.614 [2024-07-15 22:48:43.849889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.614 [2024-07-15 22:48:43.850070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.614 [2024-07-15 22:48:43.850096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.614 [2024-07-15 22:48:43.850110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.614 [2024-07-15 22:48:43.850123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.614 [2024-07-15 22:48:43.850151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.614 qpair failed and we were unable to recover it. 
00:25:00.614 [2024-07-15 22:48:43.859890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.614 [2024-07-15 22:48:43.860053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.614 [2024-07-15 22:48:43.860078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.614 [2024-07-15 22:48:43.860093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.614 [2024-07-15 22:48:43.860106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.614 [2024-07-15 22:48:43.860135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.614 qpair failed and we were unable to recover it. 
00:25:00.614 [2024-07-15 22:48:43.869919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.614 [2024-07-15 22:48:43.870067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.614 [2024-07-15 22:48:43.870091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.614 [2024-07-15 22:48:43.870106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.614 [2024-07-15 22:48:43.870119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.614 [2024-07-15 22:48:43.870147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.614 qpair failed and we were unable to recover it. 
00:25:00.614 [2024-07-15 22:48:43.879948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.614 [2024-07-15 22:48:43.880101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.614 [2024-07-15 22:48:43.880126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.614 [2024-07-15 22:48:43.880141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.614 [2024-07-15 22:48:43.880153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.614 [2024-07-15 22:48:43.880181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.614 qpair failed and we were unable to recover it. 
00:25:00.614 [2024-07-15 22:48:43.889956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.614 [2024-07-15 22:48:43.890104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.614 [2024-07-15 22:48:43.890128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.614 [2024-07-15 22:48:43.890143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.614 [2024-07-15 22:48:43.890156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.614 [2024-07-15 22:48:43.890184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.614 qpair failed and we were unable to recover it. 
00:25:00.614 [2024-07-15 22:48:43.899981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.614 [2024-07-15 22:48:43.900124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.614 [2024-07-15 22:48:43.900149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.614 [2024-07-15 22:48:43.900163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.614 [2024-07-15 22:48:43.900176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.614 [2024-07-15 22:48:43.900204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.614 qpair failed and we were unable to recover it. 
00:25:00.614 [2024-07-15 22:48:43.910030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.614 [2024-07-15 22:48:43.910180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.614 [2024-07-15 22:48:43.910205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.614 [2024-07-15 22:48:43.910219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.614 [2024-07-15 22:48:43.910232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.614 [2024-07-15 22:48:43.910260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.614 qpair failed and we were unable to recover it. 
00:25:00.614 [2024-07-15 22:48:43.920041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.614 [2024-07-15 22:48:43.920189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.614 [2024-07-15 22:48:43.920214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.614 [2024-07-15 22:48:43.920228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.614 [2024-07-15 22:48:43.920241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.615 [2024-07-15 22:48:43.920269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.615 qpair failed and we were unable to recover it. 
00:25:00.615 [2024-07-15 22:48:43.930080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.615 [2024-07-15 22:48:43.930229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.615 [2024-07-15 22:48:43.930260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.615 [2024-07-15 22:48:43.930275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.615 [2024-07-15 22:48:43.930288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.615 [2024-07-15 22:48:43.930316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.615 qpair failed and we were unable to recover it. 
00:25:00.615 [2024-07-15 22:48:43.940138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.615 [2024-07-15 22:48:43.940288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.615 [2024-07-15 22:48:43.940313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.615 [2024-07-15 22:48:43.940328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.615 [2024-07-15 22:48:43.940341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.615 [2024-07-15 22:48:43.940369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.615 qpair failed and we were unable to recover it. 
00:25:00.615 [2024-07-15 22:48:43.950175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.615 [2024-07-15 22:48:43.950325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.615 [2024-07-15 22:48:43.950350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.615 [2024-07-15 22:48:43.950365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.615 [2024-07-15 22:48:43.950378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.615 [2024-07-15 22:48:43.950405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.615 qpair failed and we were unable to recover it. 
00:25:00.615 [2024-07-15 22:48:43.960164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.615 [2024-07-15 22:48:43.960309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.615 [2024-07-15 22:48:43.960334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.615 [2024-07-15 22:48:43.960348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.615 [2024-07-15 22:48:43.960361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.615 [2024-07-15 22:48:43.960389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.615 qpair failed and we were unable to recover it. 
00:25:00.615 [2024-07-15 22:48:43.970192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.615 [2024-07-15 22:48:43.970334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.615 [2024-07-15 22:48:43.970359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.615 [2024-07-15 22:48:43.970373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.615 [2024-07-15 22:48:43.970386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.615 [2024-07-15 22:48:43.970414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.615 qpair failed and we were unable to recover it. 
00:25:00.615 [2024-07-15 22:48:43.980249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.615 [2024-07-15 22:48:43.980443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.615 [2024-07-15 22:48:43.980468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.615 [2024-07-15 22:48:43.980482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.615 [2024-07-15 22:48:43.980495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.615 [2024-07-15 22:48:43.980523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.615 qpair failed and we were unable to recover it. 
00:25:00.615 [2024-07-15 22:48:43.990248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.615 [2024-07-15 22:48:43.990413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.615 [2024-07-15 22:48:43.990438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.615 [2024-07-15 22:48:43.990453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.615 [2024-07-15 22:48:43.990466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.615 [2024-07-15 22:48:43.990493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.615 qpair failed and we were unable to recover it. 
00:25:00.615 [2024-07-15 22:48:44.000337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.615 [2024-07-15 22:48:44.000508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.615 [2024-07-15 22:48:44.000535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.615 [2024-07-15 22:48:44.000550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.615 [2024-07-15 22:48:44.000563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.615 [2024-07-15 22:48:44.000592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.615 qpair failed and we were unable to recover it. 
00:25:00.615 [2024-07-15 22:48:44.010345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.615 [2024-07-15 22:48:44.010489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.615 [2024-07-15 22:48:44.010515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.615 [2024-07-15 22:48:44.010530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.615 [2024-07-15 22:48:44.010543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.615 [2024-07-15 22:48:44.010571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.615 qpair failed and we were unable to recover it. 
00:25:00.615 [2024-07-15 22:48:44.020314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.615 [2024-07-15 22:48:44.020472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.615 [2024-07-15 22:48:44.020502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.615 [2024-07-15 22:48:44.020518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.615 [2024-07-15 22:48:44.020531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.615 [2024-07-15 22:48:44.020559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.615 qpair failed and we were unable to recover it. 
00:25:00.615 [2024-07-15 22:48:44.030343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.615 [2024-07-15 22:48:44.030493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.615 [2024-07-15 22:48:44.030518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.615 [2024-07-15 22:48:44.030533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.615 [2024-07-15 22:48:44.030546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.615 [2024-07-15 22:48:44.030574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.615 qpair failed and we were unable to recover it. 
00:25:00.615 [2024-07-15 22:48:44.040365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.615 [2024-07-15 22:48:44.040512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.615 [2024-07-15 22:48:44.040537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.615 [2024-07-15 22:48:44.040551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.615 [2024-07-15 22:48:44.040564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.615 [2024-07-15 22:48:44.040593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.615 qpair failed and we were unable to recover it. 
00:25:00.615 [2024-07-15 22:48:44.050426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.615 [2024-07-15 22:48:44.050574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.615 [2024-07-15 22:48:44.050600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.615 [2024-07-15 22:48:44.050615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.615 [2024-07-15 22:48:44.050628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.615 [2024-07-15 22:48:44.050656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.615 qpair failed and we were unable to recover it. 
00:25:00.615 [2024-07-15 22:48:44.060471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.615 [2024-07-15 22:48:44.060670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.615 [2024-07-15 22:48:44.060695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.615 [2024-07-15 22:48:44.060709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.615 [2024-07-15 22:48:44.060722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.615 [2024-07-15 22:48:44.060757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.615 qpair failed and we were unable to recover it. 
00:25:00.615 [2024-07-15 22:48:44.070472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.615 [2024-07-15 22:48:44.070624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.616 [2024-07-15 22:48:44.070649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.616 [2024-07-15 22:48:44.070664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.616 [2024-07-15 22:48:44.070677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.616 [2024-07-15 22:48:44.070705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.616 qpair failed and we were unable to recover it. 
00:25:00.616 [2024-07-15 22:48:44.080529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.616 [2024-07-15 22:48:44.080677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.616 [2024-07-15 22:48:44.080703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.616 [2024-07-15 22:48:44.080717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.616 [2024-07-15 22:48:44.080731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.616 [2024-07-15 22:48:44.080759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.616 qpair failed and we were unable to recover it. 
00:25:00.616 [2024-07-15 22:48:44.090517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.616 [2024-07-15 22:48:44.090664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.616 [2024-07-15 22:48:44.090690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.616 [2024-07-15 22:48:44.090705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.616 [2024-07-15 22:48:44.090718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.616 [2024-07-15 22:48:44.090746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.616 qpair failed and we were unable to recover it. 
00:25:00.616 [2024-07-15 22:48:44.100545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.616 [2024-07-15 22:48:44.100702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.616 [2024-07-15 22:48:44.100727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.616 [2024-07-15 22:48:44.100742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.616 [2024-07-15 22:48:44.100755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.616 [2024-07-15 22:48:44.100783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.616 qpair failed and we were unable to recover it. 
00:25:00.616 [2024-07-15 22:48:44.110596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.616 [2024-07-15 22:48:44.110773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.616 [2024-07-15 22:48:44.110803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.616 [2024-07-15 22:48:44.110818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.616 [2024-07-15 22:48:44.110831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.616 [2024-07-15 22:48:44.110859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.616 qpair failed and we were unable to recover it. 
00:25:00.877 [2024-07-15 22:48:44.120625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.878 [2024-07-15 22:48:44.120777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.878 [2024-07-15 22:48:44.120803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.878 [2024-07-15 22:48:44.120818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.878 [2024-07-15 22:48:44.120831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.878 [2024-07-15 22:48:44.120859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.878 qpair failed and we were unable to recover it. 
00:25:00.878 [2024-07-15 22:48:44.130633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.878 [2024-07-15 22:48:44.130788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.878 [2024-07-15 22:48:44.130813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.878 [2024-07-15 22:48:44.130828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.878 [2024-07-15 22:48:44.130841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.878 [2024-07-15 22:48:44.130869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.878 qpair failed and we were unable to recover it. 
00:25:00.878 [2024-07-15 22:48:44.140653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.878 [2024-07-15 22:48:44.140797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.878 [2024-07-15 22:48:44.140822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.878 [2024-07-15 22:48:44.140836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.878 [2024-07-15 22:48:44.140849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:00.878 [2024-07-15 22:48:44.140887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.878 qpair failed and we were unable to recover it. 
00:25:01.142 (previous error sequence repeated 35 more times, timestamps 22:48:44.150 through 22:48:44.491; every connect attempt for qpair id 3 failed with sct 1, sc 130 and CQ transport error -6)
00:25:01.142 [2024-07-15 22:48:44.501661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.142 [2024-07-15 22:48:44.501804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.142 [2024-07-15 22:48:44.501829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.142 [2024-07-15 22:48:44.501844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.142 [2024-07-15 22:48:44.501856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.142 [2024-07-15 22:48:44.501894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.142 qpair failed and we were unable to recover it. 
00:25:01.142 [2024-07-15 22:48:44.511711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.142 [2024-07-15 22:48:44.511872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.142 [2024-07-15 22:48:44.511906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.142 [2024-07-15 22:48:44.511920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.142 [2024-07-15 22:48:44.511939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.142 [2024-07-15 22:48:44.511968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.142 qpair failed and we were unable to recover it. 
00:25:01.142 [2024-07-15 22:48:44.521755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.142 [2024-07-15 22:48:44.521937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.142 [2024-07-15 22:48:44.521964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.142 [2024-07-15 22:48:44.521978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.142 [2024-07-15 22:48:44.521995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.143 [2024-07-15 22:48:44.522026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.143 qpair failed and we were unable to recover it. 
00:25:01.143 [2024-07-15 22:48:44.531741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.143 [2024-07-15 22:48:44.531915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.143 [2024-07-15 22:48:44.531941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.143 [2024-07-15 22:48:44.531956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.143 [2024-07-15 22:48:44.531969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.143 [2024-07-15 22:48:44.531998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.143 qpair failed and we were unable to recover it. 
00:25:01.143 [2024-07-15 22:48:44.541790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.143 [2024-07-15 22:48:44.541972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.143 [2024-07-15 22:48:44.541998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.143 [2024-07-15 22:48:44.542013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.143 [2024-07-15 22:48:44.542027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.143 [2024-07-15 22:48:44.542056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.143 qpair failed and we were unable to recover it. 
00:25:01.143 [2024-07-15 22:48:44.551838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.143 [2024-07-15 22:48:44.552003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.143 [2024-07-15 22:48:44.552028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.143 [2024-07-15 22:48:44.552043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.143 [2024-07-15 22:48:44.552056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.143 [2024-07-15 22:48:44.552084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.143 qpair failed and we were unable to recover it. 
00:25:01.143 [2024-07-15 22:48:44.561834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.143 [2024-07-15 22:48:44.562028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.143 [2024-07-15 22:48:44.562053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.143 [2024-07-15 22:48:44.562067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.143 [2024-07-15 22:48:44.562081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.143 [2024-07-15 22:48:44.562109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.143 qpair failed and we were unable to recover it. 
00:25:01.143 [2024-07-15 22:48:44.571918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.143 [2024-07-15 22:48:44.572089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.143 [2024-07-15 22:48:44.572115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.143 [2024-07-15 22:48:44.572129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.143 [2024-07-15 22:48:44.572142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.143 [2024-07-15 22:48:44.572170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.143 qpair failed and we were unable to recover it. 
00:25:01.143 [2024-07-15 22:48:44.581901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.143 [2024-07-15 22:48:44.582054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.143 [2024-07-15 22:48:44.582079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.143 [2024-07-15 22:48:44.582093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.143 [2024-07-15 22:48:44.582106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.143 [2024-07-15 22:48:44.582135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.143 qpair failed and we were unable to recover it. 
00:25:01.143 [2024-07-15 22:48:44.591983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.143 [2024-07-15 22:48:44.592135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.143 [2024-07-15 22:48:44.592161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.143 [2024-07-15 22:48:44.592175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.143 [2024-07-15 22:48:44.592189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.143 [2024-07-15 22:48:44.592217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.143 qpair failed and we were unable to recover it. 
00:25:01.143 [2024-07-15 22:48:44.601972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.143 [2024-07-15 22:48:44.602143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.143 [2024-07-15 22:48:44.602168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.143 [2024-07-15 22:48:44.602188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.143 [2024-07-15 22:48:44.602202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.143 [2024-07-15 22:48:44.602231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.143 qpair failed and we were unable to recover it. 
00:25:01.143 [2024-07-15 22:48:44.612012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.143 [2024-07-15 22:48:44.612197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.143 [2024-07-15 22:48:44.612222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.143 [2024-07-15 22:48:44.612236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.143 [2024-07-15 22:48:44.612249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.143 [2024-07-15 22:48:44.612277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.143 qpair failed and we were unable to recover it. 
00:25:01.143 [2024-07-15 22:48:44.622003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.143 [2024-07-15 22:48:44.622160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.143 [2024-07-15 22:48:44.622184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.143 [2024-07-15 22:48:44.622199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.143 [2024-07-15 22:48:44.622212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.143 [2024-07-15 22:48:44.622239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.143 qpair failed and we were unable to recover it. 
00:25:01.143 [2024-07-15 22:48:44.632045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.143 [2024-07-15 22:48:44.632193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.143 [2024-07-15 22:48:44.632217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.143 [2024-07-15 22:48:44.632232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.143 [2024-07-15 22:48:44.632245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.143 [2024-07-15 22:48:44.632273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.143 qpair failed and we were unable to recover it. 
00:25:01.404 [2024-07-15 22:48:44.642086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.404 [2024-07-15 22:48:44.642231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.404 [2024-07-15 22:48:44.642256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.404 [2024-07-15 22:48:44.642270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.404 [2024-07-15 22:48:44.642283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.404 [2024-07-15 22:48:44.642311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.404 qpair failed and we were unable to recover it. 
00:25:01.404 [2024-07-15 22:48:44.652121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.404 [2024-07-15 22:48:44.652269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.404 [2024-07-15 22:48:44.652295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.404 [2024-07-15 22:48:44.652310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.404 [2024-07-15 22:48:44.652323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.404 [2024-07-15 22:48:44.652350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.404 qpair failed and we were unable to recover it. 
00:25:01.404 [2024-07-15 22:48:44.662107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.404 [2024-07-15 22:48:44.662252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.404 [2024-07-15 22:48:44.662277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.404 [2024-07-15 22:48:44.662291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.404 [2024-07-15 22:48:44.662304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.404 [2024-07-15 22:48:44.662332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.404 qpair failed and we were unable to recover it. 
00:25:01.404 [2024-07-15 22:48:44.672175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.404 [2024-07-15 22:48:44.672336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.404 [2024-07-15 22:48:44.672361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.404 [2024-07-15 22:48:44.672375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.404 [2024-07-15 22:48:44.672388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.404 [2024-07-15 22:48:44.672416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.404 qpair failed and we were unable to recover it. 
00:25:01.404 [2024-07-15 22:48:44.682189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.404 [2024-07-15 22:48:44.682349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.404 [2024-07-15 22:48:44.682374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.404 [2024-07-15 22:48:44.682388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.404 [2024-07-15 22:48:44.682401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.404 [2024-07-15 22:48:44.682429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.404 qpair failed and we were unable to recover it. 
00:25:01.404 [2024-07-15 22:48:44.692244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.404 [2024-07-15 22:48:44.692393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.404 [2024-07-15 22:48:44.692420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.404 [2024-07-15 22:48:44.692445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.404 [2024-07-15 22:48:44.692459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.404 [2024-07-15 22:48:44.692488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.404 qpair failed and we were unable to recover it. 
00:25:01.404 [2024-07-15 22:48:44.702263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.404 [2024-07-15 22:48:44.702434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.404 [2024-07-15 22:48:44.702460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.404 [2024-07-15 22:48:44.702474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.404 [2024-07-15 22:48:44.702487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.404 [2024-07-15 22:48:44.702516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.404 qpair failed and we were unable to recover it. 
00:25:01.404 [2024-07-15 22:48:44.712327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.404 [2024-07-15 22:48:44.712479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.404 [2024-07-15 22:48:44.712504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.404 [2024-07-15 22:48:44.712519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.404 [2024-07-15 22:48:44.712532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.404 [2024-07-15 22:48:44.712560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.404 qpair failed and we were unable to recover it. 
00:25:01.404 [2024-07-15 22:48:44.722290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.404 [2024-07-15 22:48:44.722442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.404 [2024-07-15 22:48:44.722467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.404 [2024-07-15 22:48:44.722481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.404 [2024-07-15 22:48:44.722494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.404 [2024-07-15 22:48:44.722521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.404 qpair failed and we were unable to recover it. 
00:25:01.404 [2024-07-15 22:48:44.732350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.404 [2024-07-15 22:48:44.732533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.404 [2024-07-15 22:48:44.732558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.404 [2024-07-15 22:48:44.732572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.404 [2024-07-15 22:48:44.732585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.404 [2024-07-15 22:48:44.732613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.404 qpair failed and we were unable to recover it. 
00:25:01.404 [2024-07-15 22:48:44.742375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.404 [2024-07-15 22:48:44.742532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.404 [2024-07-15 22:48:44.742561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.404 [2024-07-15 22:48:44.742579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.404 [2024-07-15 22:48:44.742592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.404 [2024-07-15 22:48:44.742622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.404 qpair failed and we were unable to recover it. 
00:25:01.404 [2024-07-15 22:48:44.752403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.404 [2024-07-15 22:48:44.752598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.404 [2024-07-15 22:48:44.752623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.404 [2024-07-15 22:48:44.752638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.404 [2024-07-15 22:48:44.752651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.404 [2024-07-15 22:48:44.752679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.404 qpair failed and we were unable to recover it. 
00:25:01.404 [2024-07-15 22:48:44.762436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.404 [2024-07-15 22:48:44.762616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.404 [2024-07-15 22:48:44.762642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.404 [2024-07-15 22:48:44.762658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.404 [2024-07-15 22:48:44.762671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.404 [2024-07-15 22:48:44.762699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.404 qpair failed and we were unable to recover it. 
00:25:01.404 [2024-07-15 22:48:44.772445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.404 [2024-07-15 22:48:44.772594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.404 [2024-07-15 22:48:44.772619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.404 [2024-07-15 22:48:44.772634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.404 [2024-07-15 22:48:44.772647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.404 [2024-07-15 22:48:44.772676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.404 qpair failed and we were unable to recover it. 
00:25:01.404 [2024-07-15 22:48:44.782457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.404 [2024-07-15 22:48:44.782600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.404 [2024-07-15 22:48:44.782625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.404 [2024-07-15 22:48:44.782646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.405 [2024-07-15 22:48:44.782660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.405 [2024-07-15 22:48:44.782688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.405 qpair failed and we were unable to recover it. 
00:25:01.405 [2024-07-15 22:48:44.792517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.405 [2024-07-15 22:48:44.792694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.405 [2024-07-15 22:48:44.792719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.405 [2024-07-15 22:48:44.792733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.405 [2024-07-15 22:48:44.792747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.405 [2024-07-15 22:48:44.792775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.405 qpair failed and we were unable to recover it. 
00:25:01.405 [2024-07-15 22:48:44.802515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.405 [2024-07-15 22:48:44.802705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.405 [2024-07-15 22:48:44.802730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.405 [2024-07-15 22:48:44.802745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.405 [2024-07-15 22:48:44.802758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.405 [2024-07-15 22:48:44.802786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.405 qpair failed and we were unable to recover it. 
00:25:01.405 [2024-07-15 22:48:44.812564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.405 [2024-07-15 22:48:44.812739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.405 [2024-07-15 22:48:44.812766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.405 [2024-07-15 22:48:44.812781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.405 [2024-07-15 22:48:44.812798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.405 [2024-07-15 22:48:44.812828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.405 qpair failed and we were unable to recover it. 
00:25:01.405 [2024-07-15 22:48:44.822569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.405 [2024-07-15 22:48:44.822715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.405 [2024-07-15 22:48:44.822741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.405 [2024-07-15 22:48:44.822756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.405 [2024-07-15 22:48:44.822769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.405 [2024-07-15 22:48:44.822799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.405 qpair failed and we were unable to recover it. 
00:25:01.405 [2024-07-15 22:48:44.832606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.405 [2024-07-15 22:48:44.832800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.405 [2024-07-15 22:48:44.832826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.405 [2024-07-15 22:48:44.832841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.405 [2024-07-15 22:48:44.832854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.405 [2024-07-15 22:48:44.832889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.405 qpair failed and we were unable to recover it. 
00:25:01.405 [2024-07-15 22:48:44.842621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.405 [2024-07-15 22:48:44.842766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.405 [2024-07-15 22:48:44.842792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.405 [2024-07-15 22:48:44.842806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.405 [2024-07-15 22:48:44.842819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.405 [2024-07-15 22:48:44.842847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.405 qpair failed and we were unable to recover it. 
00:25:01.405 [2024-07-15 22:48:44.852636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.405 [2024-07-15 22:48:44.852792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.405 [2024-07-15 22:48:44.852817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.405 [2024-07-15 22:48:44.852832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.405 [2024-07-15 22:48:44.852845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.405 [2024-07-15 22:48:44.852872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.405 qpair failed and we were unable to recover it. 
00:25:01.405 [2024-07-15 22:48:44.862681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.405 [2024-07-15 22:48:44.862873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.405 [2024-07-15 22:48:44.862905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.405 [2024-07-15 22:48:44.862919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.405 [2024-07-15 22:48:44.862932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.405 [2024-07-15 22:48:44.862960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.405 qpair failed and we were unable to recover it. 
00:25:01.405 [2024-07-15 22:48:44.872719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.405 [2024-07-15 22:48:44.872868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.405 [2024-07-15 22:48:44.872903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.405 [2024-07-15 22:48:44.872920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.405 [2024-07-15 22:48:44.872933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.405 [2024-07-15 22:48:44.872961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.405 qpair failed and we were unable to recover it. 
00:25:01.405 [2024-07-15 22:48:44.882769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.405 [2024-07-15 22:48:44.882929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.405 [2024-07-15 22:48:44.882954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.405 [2024-07-15 22:48:44.882969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.405 [2024-07-15 22:48:44.882982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.405 [2024-07-15 22:48:44.883011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.405 qpair failed and we were unable to recover it. 
00:25:01.405 [2024-07-15 22:48:44.892778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.405 [2024-07-15 22:48:44.892945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.405 [2024-07-15 22:48:44.892971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.405 [2024-07-15 22:48:44.892985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.405 [2024-07-15 22:48:44.892998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.405 [2024-07-15 22:48:44.893026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.405 qpair failed and we were unable to recover it. 
00:25:01.405 [2024-07-15 22:48:44.902791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.405 [2024-07-15 22:48:44.902945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.665 [2024-07-15 22:48:44.902969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.665 [2024-07-15 22:48:44.902984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.665 [2024-07-15 22:48:44.902998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.665 [2024-07-15 22:48:44.903027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.665 qpair failed and we were unable to recover it. 
00:25:01.665 [2024-07-15 22:48:44.912825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.665 [2024-07-15 22:48:44.913009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.665 [2024-07-15 22:48:44.913035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.665 [2024-07-15 22:48:44.913050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.665 [2024-07-15 22:48:44.913063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.665 [2024-07-15 22:48:44.913091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.665 qpair failed and we were unable to recover it. 
00:25:01.665 [2024-07-15 22:48:44.922850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.665 [2024-07-15 22:48:44.923043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.665 [2024-07-15 22:48:44.923068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.665 [2024-07-15 22:48:44.923082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.665 [2024-07-15 22:48:44.923095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.665 [2024-07-15 22:48:44.923124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.665 qpair failed and we were unable to recover it. 
00:25:01.665 [2024-07-15 22:48:44.932855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.665 [2024-07-15 22:48:44.933011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.666 [2024-07-15 22:48:44.933036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.666 [2024-07-15 22:48:44.933051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.666 [2024-07-15 22:48:44.933063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.666 [2024-07-15 22:48:44.933092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.666 qpair failed and we were unable to recover it. 
00:25:01.666 [2024-07-15 22:48:44.942920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.666 [2024-07-15 22:48:44.943104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.666 [2024-07-15 22:48:44.943130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.666 [2024-07-15 22:48:44.943144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.666 [2024-07-15 22:48:44.943157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.666 [2024-07-15 22:48:44.943185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.666 qpair failed and we were unable to recover it. 
00:25:01.666 [2024-07-15 22:48:44.952933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.666 [2024-07-15 22:48:44.953082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.666 [2024-07-15 22:48:44.953107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.666 [2024-07-15 22:48:44.953121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.666 [2024-07-15 22:48:44.953134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.666 [2024-07-15 22:48:44.953163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.666 qpair failed and we were unable to recover it. 
00:25:01.666 [2024-07-15 22:48:44.962988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.666 [2024-07-15 22:48:44.963141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.666 [2024-07-15 22:48:44.963171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.666 [2024-07-15 22:48:44.963187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.666 [2024-07-15 22:48:44.963200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.666 [2024-07-15 22:48:44.963229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.666 qpair failed and we were unable to recover it. 
00:25:01.666 [2024-07-15 22:48:44.973002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.666 [2024-07-15 22:48:44.973186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.666 [2024-07-15 22:48:44.973212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.666 [2024-07-15 22:48:44.973226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.666 [2024-07-15 22:48:44.973240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.666 [2024-07-15 22:48:44.973267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.666 qpair failed and we were unable to recover it. 
00:25:01.666 [2024-07-15 22:48:44.983005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.666 [2024-07-15 22:48:44.983150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.666 [2024-07-15 22:48:44.983174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.666 [2024-07-15 22:48:44.983189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.666 [2024-07-15 22:48:44.983202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.666 [2024-07-15 22:48:44.983229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.666 qpair failed and we were unable to recover it. 
00:25:01.666 [2024-07-15 22:48:44.993046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.666 [2024-07-15 22:48:44.993194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.666 [2024-07-15 22:48:44.993218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.666 [2024-07-15 22:48:44.993233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.666 [2024-07-15 22:48:44.993246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.666 [2024-07-15 22:48:44.993273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.666 qpair failed and we were unable to recover it. 
00:25:01.666 [2024-07-15 22:48:45.003145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.666 [2024-07-15 22:48:45.003336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.666 [2024-07-15 22:48:45.003361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.666 [2024-07-15 22:48:45.003375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.666 [2024-07-15 22:48:45.003388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.666 [2024-07-15 22:48:45.003422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.666 qpair failed and we were unable to recover it. 
00:25:01.666 [2024-07-15 22:48:45.013095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.666 [2024-07-15 22:48:45.013236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.666 [2024-07-15 22:48:45.013261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.666 [2024-07-15 22:48:45.013275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.666 [2024-07-15 22:48:45.013288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.666 [2024-07-15 22:48:45.013316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.666 qpair failed and we were unable to recover it. 
00:25:01.666 [2024-07-15 22:48:45.023173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.666 [2024-07-15 22:48:45.023323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.666 [2024-07-15 22:48:45.023348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.666 [2024-07-15 22:48:45.023362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.666 [2024-07-15 22:48:45.023376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.666 [2024-07-15 22:48:45.023404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.666 qpair failed and we were unable to recover it. 
00:25:01.666 [2024-07-15 22:48:45.033177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.666 [2024-07-15 22:48:45.033329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.666 [2024-07-15 22:48:45.033353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.666 [2024-07-15 22:48:45.033367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.666 [2024-07-15 22:48:45.033380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.666 [2024-07-15 22:48:45.033408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.666 qpair failed and we were unable to recover it. 
00:25:01.666 [2024-07-15 22:48:45.043213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.666 [2024-07-15 22:48:45.043398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.666 [2024-07-15 22:48:45.043423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.666 [2024-07-15 22:48:45.043437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.666 [2024-07-15 22:48:45.043450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.666 [2024-07-15 22:48:45.043478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.666 qpair failed and we were unable to recover it. 
00:25:01.666 [2024-07-15 22:48:45.053235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.666 [2024-07-15 22:48:45.053400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.666 [2024-07-15 22:48:45.053430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.666 [2024-07-15 22:48:45.053446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.666 [2024-07-15 22:48:45.053459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.666 [2024-07-15 22:48:45.053487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.666 qpair failed and we were unable to recover it. 
00:25:01.666 [2024-07-15 22:48:45.063326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.666 [2024-07-15 22:48:45.063505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.666 [2024-07-15 22:48:45.063530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.666 [2024-07-15 22:48:45.063544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.666 [2024-07-15 22:48:45.063556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.666 [2024-07-15 22:48:45.063585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.666 qpair failed and we were unable to recover it. 
00:25:01.666 [2024-07-15 22:48:45.073279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.666 [2024-07-15 22:48:45.073433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.666 [2024-07-15 22:48:45.073459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.666 [2024-07-15 22:48:45.073473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.666 [2024-07-15 22:48:45.073486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.666 [2024-07-15 22:48:45.073514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.667 qpair failed and we were unable to recover it. 
00:25:01.667 [2024-07-15 22:48:45.083297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.667 [2024-07-15 22:48:45.083472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.667 [2024-07-15 22:48:45.083497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.667 [2024-07-15 22:48:45.083512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.667 [2024-07-15 22:48:45.083525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.667 [2024-07-15 22:48:45.083554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.667 qpair failed and we were unable to recover it. 
00:25:01.667 [2024-07-15 22:48:45.093315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.667 [2024-07-15 22:48:45.093459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.667 [2024-07-15 22:48:45.093485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.667 [2024-07-15 22:48:45.093499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.667 [2024-07-15 22:48:45.093512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.667 [2024-07-15 22:48:45.093550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.667 qpair failed and we were unable to recover it. 
00:25:01.667 [2024-07-15 22:48:45.103430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.667 [2024-07-15 22:48:45.103577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.667 [2024-07-15 22:48:45.103603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.667 [2024-07-15 22:48:45.103618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.667 [2024-07-15 22:48:45.103631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.667 [2024-07-15 22:48:45.103659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.667 qpair failed and we were unable to recover it. 
00:25:01.667 [2024-07-15 22:48:45.113388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.667 [2024-07-15 22:48:45.113539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.667 [2024-07-15 22:48:45.113563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.667 [2024-07-15 22:48:45.113578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.667 [2024-07-15 22:48:45.113591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.667 [2024-07-15 22:48:45.113618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.667 qpair failed and we were unable to recover it. 
00:25:01.667 [2024-07-15 22:48:45.123398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.667 [2024-07-15 22:48:45.123565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.667 [2024-07-15 22:48:45.123590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.667 [2024-07-15 22:48:45.123604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.667 [2024-07-15 22:48:45.123618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.667 [2024-07-15 22:48:45.123645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.667 qpair failed and we were unable to recover it. 
00:25:01.667 [2024-07-15 22:48:45.133423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.667 [2024-07-15 22:48:45.133590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.667 [2024-07-15 22:48:45.133615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.667 [2024-07-15 22:48:45.133630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.667 [2024-07-15 22:48:45.133643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.667 [2024-07-15 22:48:45.133670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.667 qpair failed and we were unable to recover it. 
00:25:01.667 [2024-07-15 22:48:45.143472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.667 [2024-07-15 22:48:45.143621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.667 [2024-07-15 22:48:45.143651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.667 [2024-07-15 22:48:45.143666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.667 [2024-07-15 22:48:45.143679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.667 [2024-07-15 22:48:45.143707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.667 qpair failed and we were unable to recover it. 
00:25:01.667 [2024-07-15 22:48:45.153489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.667 [2024-07-15 22:48:45.153638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.667 [2024-07-15 22:48:45.153663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.667 [2024-07-15 22:48:45.153677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.667 [2024-07-15 22:48:45.153690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.667 [2024-07-15 22:48:45.153717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.667 qpair failed and we were unable to recover it. 
00:25:01.667 [2024-07-15 22:48:45.163535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.667 [2024-07-15 22:48:45.163686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.667 [2024-07-15 22:48:45.163711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.667 [2024-07-15 22:48:45.163726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.667 [2024-07-15 22:48:45.163739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.667 [2024-07-15 22:48:45.163767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.667 qpair failed and we were unable to recover it. 
00:25:01.927 [2024-07-15 22:48:45.173536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.927 [2024-07-15 22:48:45.173681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.927 [2024-07-15 22:48:45.173706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.927 [2024-07-15 22:48:45.173721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.927 [2024-07-15 22:48:45.173733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.927 [2024-07-15 22:48:45.173761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.927 qpair failed and we were unable to recover it. 
00:25:01.927 [2024-07-15 22:48:45.183584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.927 [2024-07-15 22:48:45.183724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.927 [2024-07-15 22:48:45.183749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.927 [2024-07-15 22:48:45.183764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.927 [2024-07-15 22:48:45.183777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.927 [2024-07-15 22:48:45.183810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.927 qpair failed and we were unable to recover it. 
00:25:01.927 [2024-07-15 22:48:45.193618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.927 [2024-07-15 22:48:45.193797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.927 [2024-07-15 22:48:45.193822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.927 [2024-07-15 22:48:45.193836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.927 [2024-07-15 22:48:45.193849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.927 [2024-07-15 22:48:45.193884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.927 qpair failed and we were unable to recover it. 
00:25:01.927 [2024-07-15 22:48:45.203622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.927 [2024-07-15 22:48:45.203805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.927 [2024-07-15 22:48:45.203830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.927 [2024-07-15 22:48:45.203844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.927 [2024-07-15 22:48:45.203857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.927 [2024-07-15 22:48:45.203892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.927 qpair failed and we were unable to recover it. 
00:25:01.927 [2024-07-15 22:48:45.213676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.927 [2024-07-15 22:48:45.213827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.928 [2024-07-15 22:48:45.213852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.928 [2024-07-15 22:48:45.213867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.928 [2024-07-15 22:48:45.213886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.928 [2024-07-15 22:48:45.213916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.928 qpair failed and we were unable to recover it. 
00:25:01.928 [2024-07-15 22:48:45.223659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.928 [2024-07-15 22:48:45.223797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.928 [2024-07-15 22:48:45.223822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.928 [2024-07-15 22:48:45.223836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.928 [2024-07-15 22:48:45.223849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.928 [2024-07-15 22:48:45.223884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.928 qpair failed and we were unable to recover it. 
00:25:01.928 [2024-07-15 22:48:45.233757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.928 [2024-07-15 22:48:45.233924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.928 [2024-07-15 22:48:45.233954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.928 [2024-07-15 22:48:45.233970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.928 [2024-07-15 22:48:45.233983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.928 [2024-07-15 22:48:45.234011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.928 qpair failed and we were unable to recover it. 
00:25:01.928 [2024-07-15 22:48:45.243742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.928 [2024-07-15 22:48:45.243934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.928 [2024-07-15 22:48:45.243959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.928 [2024-07-15 22:48:45.243974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.928 [2024-07-15 22:48:45.243987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.928 [2024-07-15 22:48:45.244016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.928 qpair failed and we were unable to recover it. 
00:25:01.928 [2024-07-15 22:48:45.253773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.928 [2024-07-15 22:48:45.253967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.928 [2024-07-15 22:48:45.253992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.928 [2024-07-15 22:48:45.254006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.928 [2024-07-15 22:48:45.254020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.928 [2024-07-15 22:48:45.254048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.928 qpair failed and we were unable to recover it. 
00:25:01.928 [2024-07-15 22:48:45.263810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.928 [2024-07-15 22:48:45.264002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.928 [2024-07-15 22:48:45.264027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.928 [2024-07-15 22:48:45.264042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.928 [2024-07-15 22:48:45.264055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.928 [2024-07-15 22:48:45.264083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.928 qpair failed and we were unable to recover it. 
00:25:01.928 [2024-07-15 22:48:45.273846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.928 [2024-07-15 22:48:45.274038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.928 [2024-07-15 22:48:45.274063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.928 [2024-07-15 22:48:45.274078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.928 [2024-07-15 22:48:45.274096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.928 [2024-07-15 22:48:45.274124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.928 qpair failed and we were unable to recover it. 
00:25:01.928 [2024-07-15 22:48:45.283868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.928 [2024-07-15 22:48:45.284050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.928 [2024-07-15 22:48:45.284076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.928 [2024-07-15 22:48:45.284090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.928 [2024-07-15 22:48:45.284103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.928 [2024-07-15 22:48:45.284131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.928 qpair failed and we were unable to recover it. 
00:25:01.928 [2024-07-15 22:48:45.293882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.928 [2024-07-15 22:48:45.294031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.928 [2024-07-15 22:48:45.294056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.928 [2024-07-15 22:48:45.294071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.928 [2024-07-15 22:48:45.294084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.928 [2024-07-15 22:48:45.294112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.928 qpair failed and we were unable to recover it. 
00:25:01.928 [2024-07-15 22:48:45.303909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.928 [2024-07-15 22:48:45.304056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.928 [2024-07-15 22:48:45.304081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.928 [2024-07-15 22:48:45.304096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.928 [2024-07-15 22:48:45.304109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.928 [2024-07-15 22:48:45.304138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.928 qpair failed and we were unable to recover it. 
00:25:01.928 [2024-07-15 22:48:45.313941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.928 [2024-07-15 22:48:45.314093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.928 [2024-07-15 22:48:45.314117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.928 [2024-07-15 22:48:45.314132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.928 [2024-07-15 22:48:45.314146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.928 [2024-07-15 22:48:45.314173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.928 qpair failed and we were unable to recover it. 
00:25:01.928 [2024-07-15 22:48:45.324008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.928 [2024-07-15 22:48:45.324176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.928 [2024-07-15 22:48:45.324201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.928 [2024-07-15 22:48:45.324215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.928 [2024-07-15 22:48:45.324228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.928 [2024-07-15 22:48:45.324256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.928 qpair failed and we were unable to recover it. 
00:25:01.928 [2024-07-15 22:48:45.334033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.928 [2024-07-15 22:48:45.334186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.928 [2024-07-15 22:48:45.334211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.928 [2024-07-15 22:48:45.334226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.928 [2024-07-15 22:48:45.334239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.928 [2024-07-15 22:48:45.334266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.928 qpair failed and we were unable to recover it. 
00:25:01.928 [2024-07-15 22:48:45.344022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.928 [2024-07-15 22:48:45.344171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.928 [2024-07-15 22:48:45.344196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.928 [2024-07-15 22:48:45.344210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.928 [2024-07-15 22:48:45.344223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.928 [2024-07-15 22:48:45.344252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.928 qpair failed and we were unable to recover it. 
00:25:01.928 [2024-07-15 22:48:45.354091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.928 [2024-07-15 22:48:45.354239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.928 [2024-07-15 22:48:45.354264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.928 [2024-07-15 22:48:45.354279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.928 [2024-07-15 22:48:45.354292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.929 [2024-07-15 22:48:45.354320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.929 qpair failed and we were unable to recover it. 
00:25:01.929 [2024-07-15 22:48:45.364149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.929 [2024-07-15 22:48:45.364348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.929 [2024-07-15 22:48:45.364373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.929 [2024-07-15 22:48:45.364387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.929 [2024-07-15 22:48:45.364405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.929 [2024-07-15 22:48:45.364434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.929 qpair failed and we were unable to recover it. 
00:25:01.929 [2024-07-15 22:48:45.374105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.929 [2024-07-15 22:48:45.374258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.929 [2024-07-15 22:48:45.374284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.929 [2024-07-15 22:48:45.374298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.929 [2024-07-15 22:48:45.374311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.929 [2024-07-15 22:48:45.374339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.929 qpair failed and we were unable to recover it. 
00:25:01.929 [2024-07-15 22:48:45.384111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.929 [2024-07-15 22:48:45.384252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.929 [2024-07-15 22:48:45.384277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.929 [2024-07-15 22:48:45.384291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.929 [2024-07-15 22:48:45.384304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.929 [2024-07-15 22:48:45.384332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.929 qpair failed and we were unable to recover it. 
00:25:01.929 [2024-07-15 22:48:45.394150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.929 [2024-07-15 22:48:45.394302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.929 [2024-07-15 22:48:45.394327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.929 [2024-07-15 22:48:45.394341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.929 [2024-07-15 22:48:45.394354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.929 [2024-07-15 22:48:45.394382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.929 qpair failed and we were unable to recover it. 
00:25:01.929 [2024-07-15 22:48:45.404180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.929 [2024-07-15 22:48:45.404335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.929 [2024-07-15 22:48:45.404360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.929 [2024-07-15 22:48:45.404375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.929 [2024-07-15 22:48:45.404388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.929 [2024-07-15 22:48:45.404416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.929 qpair failed and we were unable to recover it. 
00:25:01.929 [2024-07-15 22:48:45.414216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.929 [2024-07-15 22:48:45.414365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.929 [2024-07-15 22:48:45.414390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.929 [2024-07-15 22:48:45.414405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.929 [2024-07-15 22:48:45.414417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200 00:25:01.929 [2024-07-15 22:48:45.414444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.929 qpair failed and we were unable to recover it. 
00:25:01.929 [2024-07-15 22:48:45.424265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.929 [2024-07-15 22:48:45.424448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.929 [2024-07-15 22:48:45.424474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.929 [2024-07-15 22:48:45.424488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.929 [2024-07-15 22:48:45.424501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200
00:25:01.929 [2024-07-15 22:48:45.424530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.929 qpair failed and we were unable to recover it.
00:25:02.191 [2024-07-15 22:48:45.434275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.191 [2024-07-15 22:48:45.434429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.191 [2024-07-15 22:48:45.434454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.191 [2024-07-15 22:48:45.434468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.191 [2024-07-15 22:48:45.434481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200
00:25:02.191 [2024-07-15 22:48:45.434509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.191 qpair failed and we were unable to recover it.
00:25:02.191 [2024-07-15 22:48:45.444281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.191 [2024-07-15 22:48:45.444431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.191 [2024-07-15 22:48:45.444456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.191 [2024-07-15 22:48:45.444470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.191 [2024-07-15 22:48:45.444483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200
00:25:02.191 [2024-07-15 22:48:45.444511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.191 qpair failed and we were unable to recover it.
00:25:02.191 [2024-07-15 22:48:45.454364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.191 [2024-07-15 22:48:45.454552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.191 [2024-07-15 22:48:45.454578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.191 [2024-07-15 22:48:45.454592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.191 [2024-07-15 22:48:45.454611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200
00:25:02.191 [2024-07-15 22:48:45.454639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.191 qpair failed and we were unable to recover it.
00:25:02.191 [2024-07-15 22:48:45.464367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.191 [2024-07-15 22:48:45.464513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.191 [2024-07-15 22:48:45.464539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.191 [2024-07-15 22:48:45.464553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.191 [2024-07-15 22:48:45.464566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200
00:25:02.191 [2024-07-15 22:48:45.464594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.191 qpair failed and we were unable to recover it.
00:25:02.191 [2024-07-15 22:48:45.474398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.191 [2024-07-15 22:48:45.474582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.191 [2024-07-15 22:48:45.474609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.191 [2024-07-15 22:48:45.474628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.191 [2024-07-15 22:48:45.474642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200
00:25:02.191 [2024-07-15 22:48:45.474671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.191 qpair failed and we were unable to recover it.
00:25:02.191 [2024-07-15 22:48:45.484394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.191 [2024-07-15 22:48:45.484565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.191 [2024-07-15 22:48:45.484589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.191 [2024-07-15 22:48:45.484603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.191 [2024-07-15 22:48:45.484615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200
00:25:02.191 [2024-07-15 22:48:45.484643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.191 qpair failed and we were unable to recover it.
00:25:02.191 [2024-07-15 22:48:45.494452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.191 [2024-07-15 22:48:45.494599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.191 [2024-07-15 22:48:45.494624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.191 [2024-07-15 22:48:45.494639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.191 [2024-07-15 22:48:45.494652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200
00:25:02.191 [2024-07-15 22:48:45.494681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.191 qpair failed and we were unable to recover it.
00:25:02.191 [2024-07-15 22:48:45.504490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.191 [2024-07-15 22:48:45.504643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.191 [2024-07-15 22:48:45.504669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.191 [2024-07-15 22:48:45.504683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.191 [2024-07-15 22:48:45.504696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200
00:25:02.191 [2024-07-15 22:48:45.504724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.191 qpair failed and we were unable to recover it.
00:25:02.191 [2024-07-15 22:48:45.514553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.191 [2024-07-15 22:48:45.514702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.191 [2024-07-15 22:48:45.514727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.191 [2024-07-15 22:48:45.514741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.191 [2024-07-15 22:48:45.514754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200
00:25:02.191 [2024-07-15 22:48:45.514782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.191 qpair failed and we were unable to recover it.
00:25:02.191 [2024-07-15 22:48:45.524505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.191 [2024-07-15 22:48:45.524653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.191 [2024-07-15 22:48:45.524679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.191 [2024-07-15 22:48:45.524693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.191 [2024-07-15 22:48:45.524706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200
00:25:02.191 [2024-07-15 22:48:45.524734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.191 qpair failed and we were unable to recover it.
00:25:02.191 [2024-07-15 22:48:45.534573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.191 [2024-07-15 22:48:45.534763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.191 [2024-07-15 22:48:45.534788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.191 [2024-07-15 22:48:45.534803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.191 [2024-07-15 22:48:45.534816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200
00:25:02.192 [2024-07-15 22:48:45.534844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.192 qpair failed and we were unable to recover it.
00:25:02.192 [2024-07-15 22:48:45.544558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.192 [2024-07-15 22:48:45.544721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.192 [2024-07-15 22:48:45.544746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.192 [2024-07-15 22:48:45.544766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.192 [2024-07-15 22:48:45.544781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200
00:25:02.192 [2024-07-15 22:48:45.544809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.192 qpair failed and we were unable to recover it.
00:25:02.192 [2024-07-15 22:48:45.554613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.192 [2024-07-15 22:48:45.554766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.192 [2024-07-15 22:48:45.554792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.192 [2024-07-15 22:48:45.554806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.192 [2024-07-15 22:48:45.554819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200
00:25:02.192 [2024-07-15 22:48:45.554848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.192 qpair failed and we were unable to recover it.
00:25:02.192 [2024-07-15 22:48:45.564607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.192 [2024-07-15 22:48:45.564755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.192 [2024-07-15 22:48:45.564780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.192 [2024-07-15 22:48:45.564794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.192 [2024-07-15 22:48:45.564807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200
00:25:02.192 [2024-07-15 22:48:45.564835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.192 qpair failed and we were unable to recover it.
00:25:02.192 [2024-07-15 22:48:45.574690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.192 [2024-07-15 22:48:45.574867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.192 [2024-07-15 22:48:45.574898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.192 [2024-07-15 22:48:45.574913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.192 [2024-07-15 22:48:45.574926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200
00:25:02.192 [2024-07-15 22:48:45.574954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.192 qpair failed and we were unable to recover it.
00:25:02.192 [2024-07-15 22:48:45.584684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.192 [2024-07-15 22:48:45.584833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.192 [2024-07-15 22:48:45.584858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.192 [2024-07-15 22:48:45.584873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.192 [2024-07-15 22:48:45.584895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd3f200
00:25:02.192 [2024-07-15 22:48:45.584926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.192 qpair failed and we were unable to recover it.
00:25:02.192 [2024-07-15 22:48:45.594736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.192 [2024-07-15 22:48:45.594893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.192 [2024-07-15 22:48:45.594926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.192 [2024-07-15 22:48:45.594943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.192 [2024-07-15 22:48:45.594956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:02.192 [2024-07-15 22:48:45.594987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:02.192 qpair failed and we were unable to recover it.
00:25:02.192 [2024-07-15 22:48:45.604776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.192 [2024-07-15 22:48:45.604954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.192 [2024-07-15 22:48:45.604983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.192 [2024-07-15 22:48:45.604998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.192 [2024-07-15 22:48:45.605012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:02.192 [2024-07-15 22:48:45.605042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:02.192 qpair failed and we were unable to recover it.
00:25:02.192 [2024-07-15 22:48:45.614827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.192 [2024-07-15 22:48:45.614986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.192 [2024-07-15 22:48:45.615013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.192 [2024-07-15 22:48:45.615028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.192 [2024-07-15 22:48:45.615041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:02.192 [2024-07-15 22:48:45.615071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:02.192 qpair failed and we were unable to recover it.
00:25:02.192 [2024-07-15 22:48:45.624846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.192 [2024-07-15 22:48:45.625026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.192 [2024-07-15 22:48:45.625053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.192 [2024-07-15 22:48:45.625068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.192 [2024-07-15 22:48:45.625081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:02.192 [2024-07-15 22:48:45.625113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:02.192 qpair failed and we were unable to recover it.
00:25:02.192 [2024-07-15 22:48:45.634846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.192 [2024-07-15 22:48:45.635017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.192 [2024-07-15 22:48:45.635043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.192 [2024-07-15 22:48:45.635064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.192 [2024-07-15 22:48:45.635078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:02.192 [2024-07-15 22:48:45.635108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:02.192 qpair failed and we were unable to recover it.
00:25:02.192 [2024-07-15 22:48:45.644940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.192 [2024-07-15 22:48:45.645091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.193 [2024-07-15 22:48:45.645117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.193 [2024-07-15 22:48:45.645132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.193 [2024-07-15 22:48:45.645145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:02.193 [2024-07-15 22:48:45.645175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:02.193 qpair failed and we were unable to recover it.
00:25:02.193 [2024-07-15 22:48:45.654949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.193 [2024-07-15 22:48:45.655157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.193 [2024-07-15 22:48:45.655184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.193 [2024-07-15 22:48:45.655198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.193 [2024-07-15 22:48:45.655211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:02.193 [2024-07-15 22:48:45.655242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:02.193 qpair failed and we were unable to recover it.
00:25:02.193 [2024-07-15 22:48:45.664908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.193 [2024-07-15 22:48:45.665079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.193 [2024-07-15 22:48:45.665104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.193 [2024-07-15 22:48:45.665119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.193 [2024-07-15 22:48:45.665132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:02.193 [2024-07-15 22:48:45.665162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:02.193 qpair failed and we were unable to recover it.
00:25:02.193 [2024-07-15 22:48:45.674953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.193 [2024-07-15 22:48:45.675119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.193 [2024-07-15 22:48:45.675145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.193 [2024-07-15 22:48:45.675160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.193 [2024-07-15 22:48:45.675180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:02.193 [2024-07-15 22:48:45.675210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:02.193 qpair failed and we were unable to recover it.
00:25:02.193 [2024-07-15 22:48:45.684965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.193 [2024-07-15 22:48:45.685112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.193 [2024-07-15 22:48:45.685140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.193 [2024-07-15 22:48:45.685155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.193 [2024-07-15 22:48:45.685168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:02.193 [2024-07-15 22:48:45.685199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:02.193 qpair failed and we were unable to recover it.
00:25:02.453 [2024-07-15 22:48:45.695008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.453 [2024-07-15 22:48:45.695202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.453 [2024-07-15 22:48:45.695229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.453 [2024-07-15 22:48:45.695244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.453 [2024-07-15 22:48:45.695257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:02.453 [2024-07-15 22:48:45.695287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:02.453 qpair failed and we were unable to recover it.
00:25:02.453 [2024-07-15 22:48:45.705009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.453 [2024-07-15 22:48:45.705159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.453 [2024-07-15 22:48:45.705185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.453 [2024-07-15 22:48:45.705200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.453 [2024-07-15 22:48:45.705213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:02.453 [2024-07-15 22:48:45.705244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:02.453 qpair failed and we were unable to recover it.
00:25:02.453 [2024-07-15 22:48:45.715172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.453 [2024-07-15 22:48:45.715332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.453 [2024-07-15 22:48:45.715359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.453 [2024-07-15 22:48:45.715373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.453 [2024-07-15 22:48:45.715386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:02.453 [2024-07-15 22:48:45.715415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:02.453 qpair failed and we were unable to recover it.
00:25:02.453 [2024-07-15 22:48:45.725131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.453 [2024-07-15 22:48:45.725291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.453 [2024-07-15 22:48:45.725323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.453 [2024-07-15 22:48:45.725342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.453 [2024-07-15 22:48:45.725356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:02.453 [2024-07-15 22:48:45.725387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:02.453 qpair failed and we were unable to recover it.
00:25:02.453 [2024-07-15 22:48:45.735180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.453 [2024-07-15 22:48:45.735351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.453 [2024-07-15 22:48:45.735378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.453 [2024-07-15 22:48:45.735392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.453 [2024-07-15 22:48:45.735405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:02.453 [2024-07-15 22:48:45.735435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:02.453 qpair failed and we were unable to recover it.
00:25:02.453 [2024-07-15 22:48:45.745176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.453 [2024-07-15 22:48:45.745326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.453 [2024-07-15 22:48:45.745351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.453 [2024-07-15 22:48:45.745366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.453 [2024-07-15 22:48:45.745379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:02.453 [2024-07-15 22:48:45.745409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:02.453 qpair failed and we were unable to recover it.
00:25:02.453 [2024-07-15 22:48:45.755208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.453 [2024-07-15 22:48:45.755361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.453 [2024-07-15 22:48:45.755387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.453 [2024-07-15 22:48:45.755402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.453 [2024-07-15 22:48:45.755415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:02.453 [2024-07-15 22:48:45.755446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:02.453 qpair failed and we were unable to recover it.
00:25:02.453 [2024-07-15 22:48:45.765195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.453 [2024-07-15 22:48:45.765345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.453 [2024-07-15 22:48:45.765371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.453 [2024-07-15 22:48:45.765385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.453 [2024-07-15 22:48:45.765399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:02.453 [2024-07-15 22:48:45.765450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:02.453 qpair failed and we were unable to recover it.
00:25:02.453 [2024-07-15 22:48:45.775245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.454 [2024-07-15 22:48:45.775387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.454 [2024-07-15 22:48:45.775412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.454 [2024-07-15 22:48:45.775427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.454 [2024-07-15 22:48:45.775440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:02.454 [2024-07-15 22:48:45.775470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:02.454 qpair failed and we were unable to recover it.
00:25:02.454 [2024-07-15 22:48:45.785225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.454 [2024-07-15 22:48:45.785384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.454 [2024-07-15 22:48:45.785410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.454 [2024-07-15 22:48:45.785425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.454 [2024-07-15 22:48:45.785437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.454 [2024-07-15 22:48:45.785467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.454 qpair failed and we were unable to recover it. 
00:25:02.454 [2024-07-15 22:48:45.795322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.454 [2024-07-15 22:48:45.795492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.454 [2024-07-15 22:48:45.795518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.454 [2024-07-15 22:48:45.795533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.454 [2024-07-15 22:48:45.795546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.454 [2024-07-15 22:48:45.795575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.454 qpair failed and we were unable to recover it. 
00:25:02.454 [2024-07-15 22:48:45.805320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.454 [2024-07-15 22:48:45.805469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.454 [2024-07-15 22:48:45.805494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.454 [2024-07-15 22:48:45.805509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.454 [2024-07-15 22:48:45.805523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.454 [2024-07-15 22:48:45.805553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.454 qpair failed and we were unable to recover it. 
00:25:02.454 [2024-07-15 22:48:45.815362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.454 [2024-07-15 22:48:45.815505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.454 [2024-07-15 22:48:45.815537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.454 [2024-07-15 22:48:45.815553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.454 [2024-07-15 22:48:45.815566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.454 [2024-07-15 22:48:45.815609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.454 qpair failed and we were unable to recover it. 
00:25:02.454 [2024-07-15 22:48:45.825356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.454 [2024-07-15 22:48:45.825499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.454 [2024-07-15 22:48:45.825525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.454 [2024-07-15 22:48:45.825539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.454 [2024-07-15 22:48:45.825553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.454 [2024-07-15 22:48:45.825582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.454 qpair failed and we were unable to recover it. 
00:25:02.454 [2024-07-15 22:48:45.835376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.454 [2024-07-15 22:48:45.835522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.454 [2024-07-15 22:48:45.835548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.454 [2024-07-15 22:48:45.835562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.454 [2024-07-15 22:48:45.835576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.454 [2024-07-15 22:48:45.835605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.454 qpair failed and we were unable to recover it. 
00:25:02.454 [2024-07-15 22:48:45.845450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.454 [2024-07-15 22:48:45.845596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.454 [2024-07-15 22:48:45.845622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.454 [2024-07-15 22:48:45.845637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.454 [2024-07-15 22:48:45.845650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.454 [2024-07-15 22:48:45.845679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.454 qpair failed and we were unable to recover it. 
00:25:02.454 [2024-07-15 22:48:45.855449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.454 [2024-07-15 22:48:45.855595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.454 [2024-07-15 22:48:45.855621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.454 [2024-07-15 22:48:45.855635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.454 [2024-07-15 22:48:45.855653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.454 [2024-07-15 22:48:45.855685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.454 qpair failed and we were unable to recover it. 
00:25:02.454 [2024-07-15 22:48:45.865463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.454 [2024-07-15 22:48:45.865625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.454 [2024-07-15 22:48:45.865650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.454 [2024-07-15 22:48:45.865664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.454 [2024-07-15 22:48:45.865677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.454 [2024-07-15 22:48:45.865707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.454 qpair failed and we were unable to recover it. 
00:25:02.454 [2024-07-15 22:48:45.875505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.454 [2024-07-15 22:48:45.875664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.454 [2024-07-15 22:48:45.875689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.454 [2024-07-15 22:48:45.875704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.454 [2024-07-15 22:48:45.875718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.454 [2024-07-15 22:48:45.875748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.454 qpair failed and we were unable to recover it. 
00:25:02.454 [2024-07-15 22:48:45.885518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.454 [2024-07-15 22:48:45.885664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.454 [2024-07-15 22:48:45.885690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.454 [2024-07-15 22:48:45.885704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.454 [2024-07-15 22:48:45.885717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.454 [2024-07-15 22:48:45.885748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.454 qpair failed and we were unable to recover it. 
00:25:02.454 [2024-07-15 22:48:45.895584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.454 [2024-07-15 22:48:45.895774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.454 [2024-07-15 22:48:45.895799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.454 [2024-07-15 22:48:45.895814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.454 [2024-07-15 22:48:45.895827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.454 [2024-07-15 22:48:45.895857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.454 qpair failed and we were unable to recover it. 
00:25:02.454 [2024-07-15 22:48:45.905566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.454 [2024-07-15 22:48:45.905760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.454 [2024-07-15 22:48:45.905786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.454 [2024-07-15 22:48:45.905801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.454 [2024-07-15 22:48:45.905814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.454 [2024-07-15 22:48:45.905844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.454 qpair failed and we were unable to recover it. 
00:25:02.454 [2024-07-15 22:48:45.915631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.454 [2024-07-15 22:48:45.915807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.454 [2024-07-15 22:48:45.915836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.454 [2024-07-15 22:48:45.915851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.454 [2024-07-15 22:48:45.915865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.455 [2024-07-15 22:48:45.915907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.455 qpair failed and we were unable to recover it. 
00:25:02.455 [2024-07-15 22:48:45.925665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.455 [2024-07-15 22:48:45.925847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.455 [2024-07-15 22:48:45.925873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.455 [2024-07-15 22:48:45.925901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.455 [2024-07-15 22:48:45.925916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.455 [2024-07-15 22:48:45.925947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.455 qpair failed and we were unable to recover it. 
00:25:02.455 [2024-07-15 22:48:45.935679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.455 [2024-07-15 22:48:45.935836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.455 [2024-07-15 22:48:45.935863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.455 [2024-07-15 22:48:45.935895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.455 [2024-07-15 22:48:45.935912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.455 [2024-07-15 22:48:45.935944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.455 qpair failed and we were unable to recover it. 
00:25:02.455 [2024-07-15 22:48:45.945680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.455 [2024-07-15 22:48:45.945826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.455 [2024-07-15 22:48:45.945853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.455 [2024-07-15 22:48:45.945867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.455 [2024-07-15 22:48:45.945895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.455 [2024-07-15 22:48:45.945928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.455 qpair failed and we were unable to recover it. 
00:25:02.716 [2024-07-15 22:48:45.955728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.716 [2024-07-15 22:48:45.955902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.716 [2024-07-15 22:48:45.955928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.716 [2024-07-15 22:48:45.955943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.716 [2024-07-15 22:48:45.955956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.716 [2024-07-15 22:48:45.955987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.716 qpair failed and we were unable to recover it. 
00:25:02.716 [2024-07-15 22:48:45.965784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.716 [2024-07-15 22:48:45.965976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.716 [2024-07-15 22:48:45.966002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.716 [2024-07-15 22:48:45.966016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.716 [2024-07-15 22:48:45.966030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.716 [2024-07-15 22:48:45.966059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.716 qpair failed and we were unable to recover it. 
00:25:02.716 [2024-07-15 22:48:45.975771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.716 [2024-07-15 22:48:45.975921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.716 [2024-07-15 22:48:45.975946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.716 [2024-07-15 22:48:45.975961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.716 [2024-07-15 22:48:45.975975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.716 [2024-07-15 22:48:45.976005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.716 qpair failed and we were unable to recover it. 
00:25:02.716 [2024-07-15 22:48:45.985811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.716 [2024-07-15 22:48:45.985962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.716 [2024-07-15 22:48:45.985988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.716 [2024-07-15 22:48:45.986003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.716 [2024-07-15 22:48:45.986017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.716 [2024-07-15 22:48:45.986046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.716 qpair failed and we were unable to recover it. 
00:25:02.716 [2024-07-15 22:48:45.995843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.716 [2024-07-15 22:48:45.996009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.716 [2024-07-15 22:48:45.996035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.716 [2024-07-15 22:48:45.996050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.716 [2024-07-15 22:48:45.996063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.716 [2024-07-15 22:48:45.996093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.716 qpair failed and we were unable to recover it. 
00:25:02.716 [2024-07-15 22:48:46.005860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.716 [2024-07-15 22:48:46.006019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.716 [2024-07-15 22:48:46.006045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.716 [2024-07-15 22:48:46.006059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.716 [2024-07-15 22:48:46.006072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.716 [2024-07-15 22:48:46.006102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.716 qpair failed and we were unable to recover it. 
00:25:02.716 [2024-07-15 22:48:46.015901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.716 [2024-07-15 22:48:46.016048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.716 [2024-07-15 22:48:46.016074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.716 [2024-07-15 22:48:46.016089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.716 [2024-07-15 22:48:46.016102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.716 [2024-07-15 22:48:46.016131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.716 qpair failed and we were unable to recover it. 
00:25:02.716 [2024-07-15 22:48:46.025915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.716 [2024-07-15 22:48:46.026066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.716 [2024-07-15 22:48:46.026092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.716 [2024-07-15 22:48:46.026107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.716 [2024-07-15 22:48:46.026120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.717 [2024-07-15 22:48:46.026150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.717 qpair failed and we were unable to recover it. 
00:25:02.717 [2024-07-15 22:48:46.035967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.717 [2024-07-15 22:48:46.036116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.717 [2024-07-15 22:48:46.036141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.717 [2024-07-15 22:48:46.036161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.717 [2024-07-15 22:48:46.036175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.717 [2024-07-15 22:48:46.036204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.717 qpair failed and we were unable to recover it. 
00:25:02.717 [2024-07-15 22:48:46.045978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.717 [2024-07-15 22:48:46.046122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.717 [2024-07-15 22:48:46.046158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.717 [2024-07-15 22:48:46.046172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.717 [2024-07-15 22:48:46.046185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.717 [2024-07-15 22:48:46.046215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.717 qpair failed and we were unable to recover it. 
00:25:02.717 [2024-07-15 22:48:46.056029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.717 [2024-07-15 22:48:46.056179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.717 [2024-07-15 22:48:46.056205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.717 [2024-07-15 22:48:46.056219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.717 [2024-07-15 22:48:46.056232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.717 [2024-07-15 22:48:46.056262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.717 qpair failed and we were unable to recover it. 
00:25:02.717 [2024-07-15 22:48:46.066090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.717 [2024-07-15 22:48:46.066242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.717 [2024-07-15 22:48:46.066268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.717 [2024-07-15 22:48:46.066283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.717 [2024-07-15 22:48:46.066297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.717 [2024-07-15 22:48:46.066327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.717 qpair failed and we were unable to recover it. 
00:25:02.717 [2024-07-15 22:48:46.076062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.717 [2024-07-15 22:48:46.076215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.717 [2024-07-15 22:48:46.076240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.717 [2024-07-15 22:48:46.076255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.717 [2024-07-15 22:48:46.076268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.717 [2024-07-15 22:48:46.076298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.717 qpair failed and we were unable to recover it. 
00:25:02.717 [2024-07-15 22:48:46.086133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.717 [2024-07-15 22:48:46.086296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.717 [2024-07-15 22:48:46.086321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.717 [2024-07-15 22:48:46.086336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.717 [2024-07-15 22:48:46.086349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.717 [2024-07-15 22:48:46.086378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.717 qpair failed and we were unable to recover it. 
00:25:02.717 [2024-07-15 22:48:46.096115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.717 [2024-07-15 22:48:46.096266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.717 [2024-07-15 22:48:46.096291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.717 [2024-07-15 22:48:46.096305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.717 [2024-07-15 22:48:46.096319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.717 [2024-07-15 22:48:46.096349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.717 qpair failed and we were unable to recover it. 
00:25:02.717 [2024-07-15 22:48:46.106149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.717 [2024-07-15 22:48:46.106287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.717 [2024-07-15 22:48:46.106313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.717 [2024-07-15 22:48:46.106328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.717 [2024-07-15 22:48:46.106341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.717 [2024-07-15 22:48:46.106370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.717 qpair failed and we were unable to recover it. 
00:25:02.717 [2024-07-15 22:48:46.116198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.717 [2024-07-15 22:48:46.116343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.717 [2024-07-15 22:48:46.116368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.717 [2024-07-15 22:48:46.116382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.717 [2024-07-15 22:48:46.116395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.717 [2024-07-15 22:48:46.116424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.717 qpair failed and we were unable to recover it. 
00:25:02.717 [2024-07-15 22:48:46.126197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.717 [2024-07-15 22:48:46.126340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.717 [2024-07-15 22:48:46.126373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.717 [2024-07-15 22:48:46.126388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.717 [2024-07-15 22:48:46.126402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.717 [2024-07-15 22:48:46.126431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.717 qpair failed and we were unable to recover it. 
00:25:02.717 [2024-07-15 22:48:46.136223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.717 [2024-07-15 22:48:46.136388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.717 [2024-07-15 22:48:46.136413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.717 [2024-07-15 22:48:46.136427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.717 [2024-07-15 22:48:46.136440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.717 [2024-07-15 22:48:46.136470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.717 qpair failed and we were unable to recover it. 
00:25:02.717 [2024-07-15 22:48:46.146359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.717 [2024-07-15 22:48:46.146514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.717 [2024-07-15 22:48:46.146541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.717 [2024-07-15 22:48:46.146555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.717 [2024-07-15 22:48:46.146568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.717 [2024-07-15 22:48:46.146599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.718 qpair failed and we were unable to recover it. 
00:25:02.718 [2024-07-15 22:48:46.156345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.718 [2024-07-15 22:48:46.156500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.718 [2024-07-15 22:48:46.156526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.718 [2024-07-15 22:48:46.156540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.718 [2024-07-15 22:48:46.156553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.718 [2024-07-15 22:48:46.156582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.718 qpair failed and we were unable to recover it. 
00:25:02.718 [2024-07-15 22:48:46.166329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.718 [2024-07-15 22:48:46.166493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.718 [2024-07-15 22:48:46.166518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.718 [2024-07-15 22:48:46.166533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.718 [2024-07-15 22:48:46.166546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.718 [2024-07-15 22:48:46.166582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.718 qpair failed and we were unable to recover it. 
00:25:02.718 [2024-07-15 22:48:46.176367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.718 [2024-07-15 22:48:46.176551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.718 [2024-07-15 22:48:46.176576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.718 [2024-07-15 22:48:46.176591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.718 [2024-07-15 22:48:46.176604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.718 [2024-07-15 22:48:46.176635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.718 qpair failed and we were unable to recover it. 
00:25:02.718 [2024-07-15 22:48:46.186376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.718 [2024-07-15 22:48:46.186519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.718 [2024-07-15 22:48:46.186546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.718 [2024-07-15 22:48:46.186560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.718 [2024-07-15 22:48:46.186573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.718 [2024-07-15 22:48:46.186604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.718 qpair failed and we were unable to recover it. 
00:25:02.718 [2024-07-15 22:48:46.196399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.718 [2024-07-15 22:48:46.196545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.718 [2024-07-15 22:48:46.196570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.718 [2024-07-15 22:48:46.196585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.718 [2024-07-15 22:48:46.196598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.718 [2024-07-15 22:48:46.196628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.718 qpair failed and we were unable to recover it. 
00:25:02.718 [2024-07-15 22:48:46.206422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.718 [2024-07-15 22:48:46.206566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.718 [2024-07-15 22:48:46.206591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.718 [2024-07-15 22:48:46.206606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.718 [2024-07-15 22:48:46.206619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.718 [2024-07-15 22:48:46.206648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.718 qpair failed and we were unable to recover it. 
00:25:02.978 [2024-07-15 22:48:46.216511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.978 [2024-07-15 22:48:46.216695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.978 [2024-07-15 22:48:46.216726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.978 [2024-07-15 22:48:46.216742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.978 [2024-07-15 22:48:46.216755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.978 [2024-07-15 22:48:46.216784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.978 qpair failed and we were unable to recover it. 
00:25:02.978 [2024-07-15 22:48:46.226484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.978 [2024-07-15 22:48:46.226677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.978 [2024-07-15 22:48:46.226704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.978 [2024-07-15 22:48:46.226720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.978 [2024-07-15 22:48:46.226733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.978 [2024-07-15 22:48:46.226764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.978 qpair failed and we were unable to recover it. 
00:25:02.978 [2024-07-15 22:48:46.236519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.978 [2024-07-15 22:48:46.236685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.978 [2024-07-15 22:48:46.236711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.978 [2024-07-15 22:48:46.236729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.978 [2024-07-15 22:48:46.236750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.978 [2024-07-15 22:48:46.236783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.978 qpair failed and we were unable to recover it. 
00:25:02.978 [2024-07-15 22:48:46.246567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.978 [2024-07-15 22:48:46.246737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.978 [2024-07-15 22:48:46.246763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.978 [2024-07-15 22:48:46.246777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.979 [2024-07-15 22:48:46.246790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.979 [2024-07-15 22:48:46.246820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.979 qpair failed and we were unable to recover it. 
00:25:02.979 [2024-07-15 22:48:46.256559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.979 [2024-07-15 22:48:46.256703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.979 [2024-07-15 22:48:46.256729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.979 [2024-07-15 22:48:46.256744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.979 [2024-07-15 22:48:46.256757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.979 [2024-07-15 22:48:46.256792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.979 qpair failed and we were unable to recover it. 
00:25:02.979 [2024-07-15 22:48:46.266643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.979 [2024-07-15 22:48:46.266847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.979 [2024-07-15 22:48:46.266874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.979 [2024-07-15 22:48:46.266903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.979 [2024-07-15 22:48:46.266918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.979 [2024-07-15 22:48:46.266965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.979 qpair failed and we were unable to recover it. 
00:25:02.979 [2024-07-15 22:48:46.276671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.979 [2024-07-15 22:48:46.276820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.979 [2024-07-15 22:48:46.276846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.979 [2024-07-15 22:48:46.276861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.979 [2024-07-15 22:48:46.276874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.979 [2024-07-15 22:48:46.276912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.979 qpair failed and we were unable to recover it. 
00:25:02.979 [2024-07-15 22:48:46.286684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.979 [2024-07-15 22:48:46.286832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.979 [2024-07-15 22:48:46.286857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.979 [2024-07-15 22:48:46.286872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.979 [2024-07-15 22:48:46.286894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.979 [2024-07-15 22:48:46.286925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.979 qpair failed and we were unable to recover it. 
00:25:02.979 [2024-07-15 22:48:46.296672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.979 [2024-07-15 22:48:46.296826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.979 [2024-07-15 22:48:46.296851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.979 [2024-07-15 22:48:46.296866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.979 [2024-07-15 22:48:46.296886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.979 [2024-07-15 22:48:46.296917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.979 qpair failed and we were unable to recover it. 
00:25:02.979 [2024-07-15 22:48:46.306705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.979 [2024-07-15 22:48:46.306855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.979 [2024-07-15 22:48:46.306887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.979 [2024-07-15 22:48:46.306903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.979 [2024-07-15 22:48:46.306917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.979 [2024-07-15 22:48:46.306947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.979 qpair failed and we were unable to recover it. 
00:25:02.979 [2024-07-15 22:48:46.316727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.979 [2024-07-15 22:48:46.316885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.979 [2024-07-15 22:48:46.316911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.979 [2024-07-15 22:48:46.316926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.979 [2024-07-15 22:48:46.316940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.979 [2024-07-15 22:48:46.316969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.979 qpair failed and we were unable to recover it. 
00:25:02.979 [2024-07-15 22:48:46.326775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.979 [2024-07-15 22:48:46.326925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.979 [2024-07-15 22:48:46.326950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.979 [2024-07-15 22:48:46.326965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.979 [2024-07-15 22:48:46.326979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.979 [2024-07-15 22:48:46.327022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.979 qpair failed and we were unable to recover it. 
00:25:02.979 [2024-07-15 22:48:46.336831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.979 [2024-07-15 22:48:46.337021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.979 [2024-07-15 22:48:46.337047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.979 [2024-07-15 22:48:46.337061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.979 [2024-07-15 22:48:46.337073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.979 [2024-07-15 22:48:46.337103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.979 qpair failed and we were unable to recover it. 
00:25:02.979 [2024-07-15 22:48:46.346847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.979 [2024-07-15 22:48:46.347001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.979 [2024-07-15 22:48:46.347026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.979 [2024-07-15 22:48:46.347041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.979 [2024-07-15 22:48:46.347060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.979 [2024-07-15 22:48:46.347092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.979 qpair failed and we were unable to recover it. 
00:25:02.979 [2024-07-15 22:48:46.356842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.979 [2024-07-15 22:48:46.357017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.979 [2024-07-15 22:48:46.357043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.979 [2024-07-15 22:48:46.357057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.979 [2024-07-15 22:48:46.357070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.979 [2024-07-15 22:48:46.357100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.979 qpair failed and we were unable to recover it. 
00:25:02.979 [2024-07-15 22:48:46.366914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.979 [2024-07-15 22:48:46.367066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.979 [2024-07-15 22:48:46.367091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.979 [2024-07-15 22:48:46.367106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.979 [2024-07-15 22:48:46.367119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.980 [2024-07-15 22:48:46.367150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.980 qpair failed and we were unable to recover it. 
00:25:02.980 [2024-07-15 22:48:46.376892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.980 [2024-07-15 22:48:46.377039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.980 [2024-07-15 22:48:46.377064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.980 [2024-07-15 22:48:46.377079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.980 [2024-07-15 22:48:46.377092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.980 [2024-07-15 22:48:46.377121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.980 qpair failed and we were unable to recover it. 
00:25:02.980 [2024-07-15 22:48:46.386936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.980 [2024-07-15 22:48:46.387096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.980 [2024-07-15 22:48:46.387121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.980 [2024-07-15 22:48:46.387136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.980 [2024-07-15 22:48:46.387149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.980 [2024-07-15 22:48:46.387179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.980 qpair failed and we were unable to recover it. 
00:25:02.980 [2024-07-15 22:48:46.396957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.980 [2024-07-15 22:48:46.397139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.980 [2024-07-15 22:48:46.397165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.980 [2024-07-15 22:48:46.397180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.980 [2024-07-15 22:48:46.397193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.980 [2024-07-15 22:48:46.397223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.980 qpair failed and we were unable to recover it. 
00:25:02.980 [2024-07-15 22:48:46.407030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.980 [2024-07-15 22:48:46.407220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.980 [2024-07-15 22:48:46.407246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.980 [2024-07-15 22:48:46.407260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.980 [2024-07-15 22:48:46.407273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.980 [2024-07-15 22:48:46.407303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.980 qpair failed and we were unable to recover it. 
00:25:02.980 [2024-07-15 22:48:46.417051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.980 [2024-07-15 22:48:46.417239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.980 [2024-07-15 22:48:46.417264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.980 [2024-07-15 22:48:46.417279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.980 [2024-07-15 22:48:46.417291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.980 [2024-07-15 22:48:46.417320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.980 qpair failed and we were unable to recover it. 
00:25:02.980 [2024-07-15 22:48:46.427053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.980 [2024-07-15 22:48:46.427197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.980 [2024-07-15 22:48:46.427222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.980 [2024-07-15 22:48:46.427236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.980 [2024-07-15 22:48:46.427250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.980 [2024-07-15 22:48:46.427279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.980 qpair failed and we were unable to recover it. 
00:25:02.980 [2024-07-15 22:48:46.437113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.980 [2024-07-15 22:48:46.437271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.980 [2024-07-15 22:48:46.437296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.980 [2024-07-15 22:48:46.437317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.980 [2024-07-15 22:48:46.437331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.980 [2024-07-15 22:48:46.437376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.980 qpair failed and we were unable to recover it. 
00:25:02.980 [2024-07-15 22:48:46.447100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.980 [2024-07-15 22:48:46.447290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.980 [2024-07-15 22:48:46.447316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.980 [2024-07-15 22:48:46.447331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.980 [2024-07-15 22:48:46.447344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.980 [2024-07-15 22:48:46.447374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.980 qpair failed and we were unable to recover it. 
00:25:02.980 [2024-07-15 22:48:46.457132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.980 [2024-07-15 22:48:46.457291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.980 [2024-07-15 22:48:46.457317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.980 [2024-07-15 22:48:46.457333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.980 [2024-07-15 22:48:46.457346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.980 [2024-07-15 22:48:46.457378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.980 qpair failed and we were unable to recover it. 
00:25:02.980 [2024-07-15 22:48:46.467191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.980 [2024-07-15 22:48:46.467337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.980 [2024-07-15 22:48:46.467362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.980 [2024-07-15 22:48:46.467377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.980 [2024-07-15 22:48:46.467390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.980 [2024-07-15 22:48:46.467421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.980 qpair failed and we were unable to recover it. 
00:25:02.980 [2024-07-15 22:48:46.477192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.980 [2024-07-15 22:48:46.477342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.980 [2024-07-15 22:48:46.477367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.980 [2024-07-15 22:48:46.477381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.980 [2024-07-15 22:48:46.477394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:02.980 [2024-07-15 22:48:46.477424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.980 qpair failed and we were unable to recover it. 
00:25:03.241 [2024-07-15 22:48:46.487223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.241 [2024-07-15 22:48:46.487375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.241 [2024-07-15 22:48:46.487398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.241 [2024-07-15 22:48:46.487412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.241 [2024-07-15 22:48:46.487424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.241 [2024-07-15 22:48:46.487453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.241 qpair failed and we were unable to recover it. 
00:25:03.241 [2024-07-15 22:48:46.497215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.241 [2024-07-15 22:48:46.497355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.241 [2024-07-15 22:48:46.497381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.241 [2024-07-15 22:48:46.497395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.241 [2024-07-15 22:48:46.497409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.241 [2024-07-15 22:48:46.497439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.241 qpair failed and we were unable to recover it. 
00:25:03.241 [2024-07-15 22:48:46.507302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.241 [2024-07-15 22:48:46.507458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.241 [2024-07-15 22:48:46.507483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.241 [2024-07-15 22:48:46.507498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.241 [2024-07-15 22:48:46.507511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.241 [2024-07-15 22:48:46.507541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.241 qpair failed and we were unable to recover it. 
00:25:03.241 [2024-07-15 22:48:46.517336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.241 [2024-07-15 22:48:46.517535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.241 [2024-07-15 22:48:46.517561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.241 [2024-07-15 22:48:46.517575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.241 [2024-07-15 22:48:46.517588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.241 [2024-07-15 22:48:46.517618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.241 qpair failed and we were unable to recover it. 
00:25:03.241 [2024-07-15 22:48:46.527352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.241 [2024-07-15 22:48:46.527499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.241 [2024-07-15 22:48:46.527523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.241 [2024-07-15 22:48:46.527544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.241 [2024-07-15 22:48:46.527559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.241 [2024-07-15 22:48:46.527588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.241 qpair failed and we were unable to recover it. 
00:25:03.241 [2024-07-15 22:48:46.537361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.241 [2024-07-15 22:48:46.537517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.241 [2024-07-15 22:48:46.537545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.241 [2024-07-15 22:48:46.537560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.241 [2024-07-15 22:48:46.537573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.241 [2024-07-15 22:48:46.537603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.241 qpair failed and we were unable to recover it. 
00:25:03.241 [2024-07-15 22:48:46.547404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.241 [2024-07-15 22:48:46.547606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.241 [2024-07-15 22:48:46.547632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.241 [2024-07-15 22:48:46.547647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.241 [2024-07-15 22:48:46.547660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.241 [2024-07-15 22:48:46.547689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.241 qpair failed and we were unable to recover it. 
00:25:03.241 [2024-07-15 22:48:46.557451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.241 [2024-07-15 22:48:46.557624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.241 [2024-07-15 22:48:46.557649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.241 [2024-07-15 22:48:46.557663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.241 [2024-07-15 22:48:46.557676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.241 [2024-07-15 22:48:46.557707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.241 qpair failed and we were unable to recover it. 
00:25:03.241 [2024-07-15 22:48:46.567504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.241 [2024-07-15 22:48:46.567717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.241 [2024-07-15 22:48:46.567744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.241 [2024-07-15 22:48:46.567760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.241 [2024-07-15 22:48:46.567777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.241 [2024-07-15 22:48:46.567810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.241 qpair failed and we were unable to recover it. 
00:25:03.241 [2024-07-15 22:48:46.577499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.241 [2024-07-15 22:48:46.577650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.241 [2024-07-15 22:48:46.577676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.241 [2024-07-15 22:48:46.577690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.241 [2024-07-15 22:48:46.577704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.241 [2024-07-15 22:48:46.577734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.241 qpair failed and we were unable to recover it. 
00:25:03.241 [2024-07-15 22:48:46.587492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.241 [2024-07-15 22:48:46.587644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.241 [2024-07-15 22:48:46.587670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.241 [2024-07-15 22:48:46.587685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.241 [2024-07-15 22:48:46.587698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.241 [2024-07-15 22:48:46.587729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.241 qpair failed and we were unable to recover it. 
00:25:03.241 [2024-07-15 22:48:46.597529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.242 [2024-07-15 22:48:46.597712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.242 [2024-07-15 22:48:46.597738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.242 [2024-07-15 22:48:46.597752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.242 [2024-07-15 22:48:46.597765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.242 [2024-07-15 22:48:46.597795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.242 qpair failed and we were unable to recover it. 
00:25:03.242 [2024-07-15 22:48:46.607581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.242 [2024-07-15 22:48:46.607788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.242 [2024-07-15 22:48:46.607813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.242 [2024-07-15 22:48:46.607827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.242 [2024-07-15 22:48:46.607841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.242 [2024-07-15 22:48:46.607870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.242 qpair failed and we were unable to recover it. 
00:25:03.242 [2024-07-15 22:48:46.617600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.242 [2024-07-15 22:48:46.617749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.242 [2024-07-15 22:48:46.617780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.242 [2024-07-15 22:48:46.617795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.242 [2024-07-15 22:48:46.617808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.242 [2024-07-15 22:48:46.617837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.242 qpair failed and we were unable to recover it. 
00:25:03.242 [2024-07-15 22:48:46.627603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.242 [2024-07-15 22:48:46.627747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.242 [2024-07-15 22:48:46.627772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.242 [2024-07-15 22:48:46.627786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.242 [2024-07-15 22:48:46.627799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.242 [2024-07-15 22:48:46.627829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.242 qpair failed and we were unable to recover it. 
00:25:03.242 [2024-07-15 22:48:46.637650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.242 [2024-07-15 22:48:46.637798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.242 [2024-07-15 22:48:46.637824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.242 [2024-07-15 22:48:46.637838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.242 [2024-07-15 22:48:46.637851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.242 [2024-07-15 22:48:46.637886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.242 qpair failed and we were unable to recover it. 
00:25:03.242 [2024-07-15 22:48:46.647657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.242 [2024-07-15 22:48:46.647805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.242 [2024-07-15 22:48:46.647831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.242 [2024-07-15 22:48:46.647845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.242 [2024-07-15 22:48:46.647858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.242 [2024-07-15 22:48:46.647899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.242 qpair failed and we were unable to recover it. 
00:25:03.242 [2024-07-15 22:48:46.657711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.242 [2024-07-15 22:48:46.657854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.242 [2024-07-15 22:48:46.657886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.242 [2024-07-15 22:48:46.657902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.242 [2024-07-15 22:48:46.657915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.242 [2024-07-15 22:48:46.657951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.242 qpair failed and we were unable to recover it. 
00:25:03.242 [2024-07-15 22:48:46.667719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.242 [2024-07-15 22:48:46.667870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.242 [2024-07-15 22:48:46.667904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.242 [2024-07-15 22:48:46.667923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.242 [2024-07-15 22:48:46.667937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.242 [2024-07-15 22:48:46.667966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.242 qpair failed and we were unable to recover it. 
00:25:03.242 [2024-07-15 22:48:46.677768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.242 [2024-07-15 22:48:46.677929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.242 [2024-07-15 22:48:46.677954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.242 [2024-07-15 22:48:46.677969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.242 [2024-07-15 22:48:46.677982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.242 [2024-07-15 22:48:46.678014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.242 qpair failed and we were unable to recover it. 
00:25:03.242 [2024-07-15 22:48:46.687749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.242 [2024-07-15 22:48:46.687916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.242 [2024-07-15 22:48:46.687942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.242 [2024-07-15 22:48:46.687956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.242 [2024-07-15 22:48:46.687969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.242 [2024-07-15 22:48:46.687999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.242 qpair failed and we were unable to recover it. 
00:25:03.242 [2024-07-15 22:48:46.697845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.242 [2024-07-15 22:48:46.698005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.242 [2024-07-15 22:48:46.698031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.242 [2024-07-15 22:48:46.698046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.242 [2024-07-15 22:48:46.698059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.242 [2024-07-15 22:48:46.698088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.242 qpair failed and we were unable to recover it. 
00:25:03.242 [2024-07-15 22:48:46.707822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.242 [2024-07-15 22:48:46.707978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.242 [2024-07-15 22:48:46.708009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.242 [2024-07-15 22:48:46.708024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.242 [2024-07-15 22:48:46.708038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.242 [2024-07-15 22:48:46.708068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.242 qpair failed and we were unable to recover it.
00:25:03.242 [2024-07-15 22:48:46.717872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.242 [2024-07-15 22:48:46.718034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.242 [2024-07-15 22:48:46.718059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.242 [2024-07-15 22:48:46.718074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.242 [2024-07-15 22:48:46.718087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.242 [2024-07-15 22:48:46.718116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.242 qpair failed and we were unable to recover it.
00:25:03.242 [2024-07-15 22:48:46.727883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.242 [2024-07-15 22:48:46.728052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.242 [2024-07-15 22:48:46.728078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.242 [2024-07-15 22:48:46.728092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.242 [2024-07-15 22:48:46.728106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.242 [2024-07-15 22:48:46.728135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.242 qpair failed and we were unable to recover it.
00:25:03.242 [2024-07-15 22:48:46.737939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.242 [2024-07-15 22:48:46.738082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.243 [2024-07-15 22:48:46.738108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.243 [2024-07-15 22:48:46.738122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.243 [2024-07-15 22:48:46.738136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.243 [2024-07-15 22:48:46.738179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.243 qpair failed and we were unable to recover it.
00:25:03.502 [2024-07-15 22:48:46.747965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.502 [2024-07-15 22:48:46.748116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.502 [2024-07-15 22:48:46.748141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.502 [2024-07-15 22:48:46.748156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.502 [2024-07-15 22:48:46.748175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.502 [2024-07-15 22:48:46.748206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.502 qpair failed and we were unable to recover it.
00:25:03.502 [2024-07-15 22:48:46.758069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.502 [2024-07-15 22:48:46.758231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.502 [2024-07-15 22:48:46.758259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.502 [2024-07-15 22:48:46.758278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.502 [2024-07-15 22:48:46.758291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.502 [2024-07-15 22:48:46.758323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.502 qpair failed and we were unable to recover it.
00:25:03.502 [2024-07-15 22:48:46.768007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.502 [2024-07-15 22:48:46.768164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.502 [2024-07-15 22:48:46.768190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.502 [2024-07-15 22:48:46.768205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.502 [2024-07-15 22:48:46.768218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.502 [2024-07-15 22:48:46.768249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.502 qpair failed and we were unable to recover it.
00:25:03.502 [2024-07-15 22:48:46.778039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.502 [2024-07-15 22:48:46.778219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.502 [2024-07-15 22:48:46.778245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.502 [2024-07-15 22:48:46.778259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.502 [2024-07-15 22:48:46.778272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.502 [2024-07-15 22:48:46.778302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.502 qpair failed and we were unable to recover it.
00:25:03.502 [2024-07-15 22:48:46.788060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.502 [2024-07-15 22:48:46.788212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.502 [2024-07-15 22:48:46.788237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.502 [2024-07-15 22:48:46.788252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.502 [2024-07-15 22:48:46.788266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.502 [2024-07-15 22:48:46.788295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.502 qpair failed and we were unable to recover it.
00:25:03.502 [2024-07-15 22:48:46.798099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.502 [2024-07-15 22:48:46.798302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.502 [2024-07-15 22:48:46.798326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.502 [2024-07-15 22:48:46.798341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.502 [2024-07-15 22:48:46.798355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.502 [2024-07-15 22:48:46.798384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.502 qpair failed and we were unable to recover it.
00:25:03.502 [2024-07-15 22:48:46.808131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.502 [2024-07-15 22:48:46.808325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.502 [2024-07-15 22:48:46.808351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.502 [2024-07-15 22:48:46.808371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.502 [2024-07-15 22:48:46.808385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.502 [2024-07-15 22:48:46.808415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.502 qpair failed and we were unable to recover it.
00:25:03.502 [2024-07-15 22:48:46.818156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.502 [2024-07-15 22:48:46.818301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.502 [2024-07-15 22:48:46.818327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.502 [2024-07-15 22:48:46.818342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.502 [2024-07-15 22:48:46.818355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.502 [2024-07-15 22:48:46.818384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.502 qpair failed and we were unable to recover it.
00:25:03.502 [2024-07-15 22:48:46.828221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.502 [2024-07-15 22:48:46.828398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.502 [2024-07-15 22:48:46.828423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.502 [2024-07-15 22:48:46.828438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.502 [2024-07-15 22:48:46.828451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.502 [2024-07-15 22:48:46.828480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.502 qpair failed and we were unable to recover it.
00:25:03.502 [2024-07-15 22:48:46.838214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.502 [2024-07-15 22:48:46.838389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.502 [2024-07-15 22:48:46.838414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.502 [2024-07-15 22:48:46.838435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.502 [2024-07-15 22:48:46.838448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.502 [2024-07-15 22:48:46.838478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.502 qpair failed and we were unable to recover it.
00:25:03.502 [2024-07-15 22:48:46.848215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.502 [2024-07-15 22:48:46.848355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.502 [2024-07-15 22:48:46.848380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.502 [2024-07-15 22:48:46.848395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.502 [2024-07-15 22:48:46.848408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.502 [2024-07-15 22:48:46.848438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.502 qpair failed and we were unable to recover it.
00:25:03.502 [2024-07-15 22:48:46.858283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.502 [2024-07-15 22:48:46.858425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.502 [2024-07-15 22:48:46.858450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.502 [2024-07-15 22:48:46.858464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.502 [2024-07-15 22:48:46.858477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.502 [2024-07-15 22:48:46.858507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.502 qpair failed and we were unable to recover it.
00:25:03.502 [2024-07-15 22:48:46.868326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.502 [2024-07-15 22:48:46.868472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.502 [2024-07-15 22:48:46.868497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.502 [2024-07-15 22:48:46.868512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.502 [2024-07-15 22:48:46.868525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.502 [2024-07-15 22:48:46.868554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.502 qpair failed and we were unable to recover it.
00:25:03.502 [2024-07-15 22:48:46.878350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.502 [2024-07-15 22:48:46.878505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.502 [2024-07-15 22:48:46.878531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.502 [2024-07-15 22:48:46.878545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.502 [2024-07-15 22:48:46.878562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.502 [2024-07-15 22:48:46.878591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.502 qpair failed and we were unable to recover it.
00:25:03.502 [2024-07-15 22:48:46.888362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.503 [2024-07-15 22:48:46.888547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.503 [2024-07-15 22:48:46.888572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.503 [2024-07-15 22:48:46.888586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.503 [2024-07-15 22:48:46.888600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.503 [2024-07-15 22:48:46.888630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.503 qpair failed and we were unable to recover it.
00:25:03.503 [2024-07-15 22:48:46.898387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.503 [2024-07-15 22:48:46.898531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.503 [2024-07-15 22:48:46.898556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.503 [2024-07-15 22:48:46.898570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.503 [2024-07-15 22:48:46.898583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.503 [2024-07-15 22:48:46.898613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.503 qpair failed and we were unable to recover it.
00:25:03.503 [2024-07-15 22:48:46.908391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.503 [2024-07-15 22:48:46.908536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.503 [2024-07-15 22:48:46.908561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.503 [2024-07-15 22:48:46.908575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.503 [2024-07-15 22:48:46.908588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.503 [2024-07-15 22:48:46.908617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.503 qpair failed and we were unable to recover it.
00:25:03.503 [2024-07-15 22:48:46.918441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.503 [2024-07-15 22:48:46.918595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.503 [2024-07-15 22:48:46.918621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.503 [2024-07-15 22:48:46.918635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.503 [2024-07-15 22:48:46.918648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.503 [2024-07-15 22:48:46.918678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.503 qpair failed and we were unable to recover it.
00:25:03.503 [2024-07-15 22:48:46.928482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.503 [2024-07-15 22:48:46.928639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.503 [2024-07-15 22:48:46.928663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.503 [2024-07-15 22:48:46.928687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.503 [2024-07-15 22:48:46.928701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.503 [2024-07-15 22:48:46.928731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.503 qpair failed and we were unable to recover it.
00:25:03.503 [2024-07-15 22:48:46.938475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.503 [2024-07-15 22:48:46.938622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.503 [2024-07-15 22:48:46.938647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.503 [2024-07-15 22:48:46.938662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.503 [2024-07-15 22:48:46.938675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.503 [2024-07-15 22:48:46.938705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.503 qpair failed and we were unable to recover it.
00:25:03.503 [2024-07-15 22:48:46.948521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.503 [2024-07-15 22:48:46.948674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.503 [2024-07-15 22:48:46.948700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.503 [2024-07-15 22:48:46.948714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.503 [2024-07-15 22:48:46.948730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.503 [2024-07-15 22:48:46.948762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.503 qpair failed and we were unable to recover it.
00:25:03.503 [2024-07-15 22:48:46.958553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.503 [2024-07-15 22:48:46.958701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.503 [2024-07-15 22:48:46.958727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.503 [2024-07-15 22:48:46.958742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.503 [2024-07-15 22:48:46.958755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.503 [2024-07-15 22:48:46.958798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.503 qpair failed and we were unable to recover it.
00:25:03.503 [2024-07-15 22:48:46.968588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.503 [2024-07-15 22:48:46.968734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.503 [2024-07-15 22:48:46.968759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.503 [2024-07-15 22:48:46.968774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.503 [2024-07-15 22:48:46.968787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.503 [2024-07-15 22:48:46.968818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.503 qpair failed and we were unable to recover it.
00:25:03.503 [2024-07-15 22:48:46.978601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.503 [2024-07-15 22:48:46.978749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.503 [2024-07-15 22:48:46.978775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.503 [2024-07-15 22:48:46.978789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.503 [2024-07-15 22:48:46.978803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.503 [2024-07-15 22:48:46.978832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.503 qpair failed and we were unable to recover it.
00:25:03.503 [2024-07-15 22:48:46.988622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.503 [2024-07-15 22:48:46.988761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.503 [2024-07-15 22:48:46.988787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.503 [2024-07-15 22:48:46.988801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.503 [2024-07-15 22:48:46.988815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.503 [2024-07-15 22:48:46.988845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.503 qpair failed and we were unable to recover it.
00:25:03.503 [2024-07-15 22:48:46.998727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.503 [2024-07-15 22:48:46.998891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.503 [2024-07-15 22:48:46.998917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.503 [2024-07-15 22:48:46.998932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.503 [2024-07-15 22:48:46.998945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.503 [2024-07-15 22:48:46.998975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.503 qpair failed and we were unable to recover it.
00:25:03.763 [2024-07-15 22:48:47.008704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.763 [2024-07-15 22:48:47.008850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.763 [2024-07-15 22:48:47.008882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.763 [2024-07-15 22:48:47.008900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.763 [2024-07-15 22:48:47.008914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.763 [2024-07-15 22:48:47.008944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.763 qpair failed and we were unable to recover it.
00:25:03.763 [2024-07-15 22:48:47.018712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.763 [2024-07-15 22:48:47.018912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.763 [2024-07-15 22:48:47.018945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.763 [2024-07-15 22:48:47.018960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.763 [2024-07-15 22:48:47.018974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.763 [2024-07-15 22:48:47.019016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.763 qpair failed and we were unable to recover it.
00:25:03.763 [2024-07-15 22:48:47.028736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.763 [2024-07-15 22:48:47.028889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.763 [2024-07-15 22:48:47.028915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.763 [2024-07-15 22:48:47.028929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.763 [2024-07-15 22:48:47.028943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.763 [2024-07-15 22:48:47.028987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.763 qpair failed and we were unable to recover it.
00:25:03.763 [2024-07-15 22:48:47.038815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.763 [2024-07-15 22:48:47.039009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.763 [2024-07-15 22:48:47.039036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.763 [2024-07-15 22:48:47.039051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.763 [2024-07-15 22:48:47.039064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.763 [2024-07-15 22:48:47.039093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.763 qpair failed and we were unable to recover it.
00:25:03.763 [2024-07-15 22:48:47.048802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.763 [2024-07-15 22:48:47.048972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.763 [2024-07-15 22:48:47.048997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.763 [2024-07-15 22:48:47.049012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.763 [2024-07-15 22:48:47.049025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.763 [2024-07-15 22:48:47.049055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.763 qpair failed and we were unable to recover it.
00:25:03.763 [2024-07-15 22:48:47.058829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.763 [2024-07-15 22:48:47.058979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.763 [2024-07-15 22:48:47.059005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.763 [2024-07-15 22:48:47.059020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.763 [2024-07-15 22:48:47.059034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90
00:25:03.763 [2024-07-15 22:48:47.059083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.763 qpair failed and we were unable to recover it.
00:25:03.763 [2024-07-15 22:48:47.068903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.763 [2024-07-15 22:48:47.069066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.763 [2024-07-15 22:48:47.069092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.763 [2024-07-15 22:48:47.069107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.763 [2024-07-15 22:48:47.069120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.763 [2024-07-15 22:48:47.069164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.763 qpair failed and we were unable to recover it. 
00:25:03.763 [2024-07-15 22:48:47.078904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.763 [2024-07-15 22:48:47.079075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.763 [2024-07-15 22:48:47.079101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.763 [2024-07-15 22:48:47.079116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.763 [2024-07-15 22:48:47.079129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.763 [2024-07-15 22:48:47.079159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.763 qpair failed and we were unable to recover it. 
00:25:03.763 [2024-07-15 22:48:47.088919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.763 [2024-07-15 22:48:47.089120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.763 [2024-07-15 22:48:47.089146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.763 [2024-07-15 22:48:47.089161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.763 [2024-07-15 22:48:47.089174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.763 [2024-07-15 22:48:47.089205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.763 qpair failed and we were unable to recover it. 
00:25:03.763 [2024-07-15 22:48:47.098935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.763 [2024-07-15 22:48:47.099084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.763 [2024-07-15 22:48:47.099110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.763 [2024-07-15 22:48:47.099124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.763 [2024-07-15 22:48:47.099137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.763 [2024-07-15 22:48:47.099167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.763 qpair failed and we were unable to recover it. 
00:25:03.763 [2024-07-15 22:48:47.108964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.763 [2024-07-15 22:48:47.109118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.763 [2024-07-15 22:48:47.109148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.763 [2024-07-15 22:48:47.109163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.763 [2024-07-15 22:48:47.109176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.763 [2024-07-15 22:48:47.109206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.763 qpair failed and we were unable to recover it. 
00:25:03.763 [2024-07-15 22:48:47.119034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.763 [2024-07-15 22:48:47.119193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.763 [2024-07-15 22:48:47.119218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.763 [2024-07-15 22:48:47.119233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.763 [2024-07-15 22:48:47.119246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.763 [2024-07-15 22:48:47.119277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.763 qpair failed and we were unable to recover it. 
00:25:03.763 [2024-07-15 22:48:47.129021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.763 [2024-07-15 22:48:47.129168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.763 [2024-07-15 22:48:47.129194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.763 [2024-07-15 22:48:47.129208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.763 [2024-07-15 22:48:47.129221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.763 [2024-07-15 22:48:47.129252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.763 qpair failed and we were unable to recover it. 
00:25:03.763 [2024-07-15 22:48:47.139075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.764 [2024-07-15 22:48:47.139235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.764 [2024-07-15 22:48:47.139260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.764 [2024-07-15 22:48:47.139274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.764 [2024-07-15 22:48:47.139287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.764 [2024-07-15 22:48:47.139317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.764 qpair failed and we were unable to recover it. 
00:25:03.764 [2024-07-15 22:48:47.149063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.764 [2024-07-15 22:48:47.149232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.764 [2024-07-15 22:48:47.149257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.764 [2024-07-15 22:48:47.149272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.764 [2024-07-15 22:48:47.149291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.764 [2024-07-15 22:48:47.149321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.764 qpair failed and we were unable to recover it. 
00:25:03.764 [2024-07-15 22:48:47.159119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.764 [2024-07-15 22:48:47.159268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.764 [2024-07-15 22:48:47.159292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.764 [2024-07-15 22:48:47.159307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.764 [2024-07-15 22:48:47.159321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.764 [2024-07-15 22:48:47.159350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.764 qpair failed and we were unable to recover it. 
00:25:03.764 [2024-07-15 22:48:47.169132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.764 [2024-07-15 22:48:47.169276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.764 [2024-07-15 22:48:47.169301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.764 [2024-07-15 22:48:47.169316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.764 [2024-07-15 22:48:47.169329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.764 [2024-07-15 22:48:47.169358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.764 qpair failed and we were unable to recover it. 
00:25:03.764 [2024-07-15 22:48:47.179170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.764 [2024-07-15 22:48:47.179339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.764 [2024-07-15 22:48:47.179365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.764 [2024-07-15 22:48:47.179380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.764 [2024-07-15 22:48:47.179393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.764 [2024-07-15 22:48:47.179422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.764 qpair failed and we were unable to recover it. 
00:25:03.764 [2024-07-15 22:48:47.189208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.764 [2024-07-15 22:48:47.189348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.764 [2024-07-15 22:48:47.189372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.764 [2024-07-15 22:48:47.189386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.764 [2024-07-15 22:48:47.189399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.764 [2024-07-15 22:48:47.189429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.764 qpair failed and we were unable to recover it. 
00:25:03.764 [2024-07-15 22:48:47.199265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.764 [2024-07-15 22:48:47.199422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.764 [2024-07-15 22:48:47.199447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.764 [2024-07-15 22:48:47.199462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.764 [2024-07-15 22:48:47.199475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.764 [2024-07-15 22:48:47.199504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.764 qpair failed and we were unable to recover it. 
00:25:03.764 [2024-07-15 22:48:47.209246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.764 [2024-07-15 22:48:47.209395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.764 [2024-07-15 22:48:47.209420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.764 [2024-07-15 22:48:47.209435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.764 [2024-07-15 22:48:47.209448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.764 [2024-07-15 22:48:47.209477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.764 qpair failed and we were unable to recover it. 
00:25:03.764 [2024-07-15 22:48:47.219330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.764 [2024-07-15 22:48:47.219478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.764 [2024-07-15 22:48:47.219503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.764 [2024-07-15 22:48:47.219517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.764 [2024-07-15 22:48:47.219530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.764 [2024-07-15 22:48:47.219560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.764 qpair failed and we were unable to recover it. 
00:25:03.764 [2024-07-15 22:48:47.229287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.764 [2024-07-15 22:48:47.229432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.764 [2024-07-15 22:48:47.229457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.764 [2024-07-15 22:48:47.229471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.764 [2024-07-15 22:48:47.229484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.764 [2024-07-15 22:48:47.229513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.764 qpair failed and we were unable to recover it. 
00:25:03.764 [2024-07-15 22:48:47.239387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.764 [2024-07-15 22:48:47.239555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.764 [2024-07-15 22:48:47.239579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.764 [2024-07-15 22:48:47.239594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.764 [2024-07-15 22:48:47.239612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.764 [2024-07-15 22:48:47.239642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.764 qpair failed and we were unable to recover it. 
00:25:03.764 [2024-07-15 22:48:47.249367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.764 [2024-07-15 22:48:47.249510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.764 [2024-07-15 22:48:47.249535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.764 [2024-07-15 22:48:47.249549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.764 [2024-07-15 22:48:47.249562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.764 [2024-07-15 22:48:47.249594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.764 qpair failed and we were unable to recover it. 
00:25:03.764 [2024-07-15 22:48:47.259401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.764 [2024-07-15 22:48:47.259546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.764 [2024-07-15 22:48:47.259571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.764 [2024-07-15 22:48:47.259585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.764 [2024-07-15 22:48:47.259598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:03.764 [2024-07-15 22:48:47.259628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.764 qpair failed and we were unable to recover it. 
00:25:04.024 [2024-07-15 22:48:47.269441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.024 [2024-07-15 22:48:47.269591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.024 [2024-07-15 22:48:47.269617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.024 [2024-07-15 22:48:47.269632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.024 [2024-07-15 22:48:47.269645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:04.024 [2024-07-15 22:48:47.269674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.024 qpair failed and we were unable to recover it. 
00:25:04.024 [2024-07-15 22:48:47.279451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.024 [2024-07-15 22:48:47.279605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.024 [2024-07-15 22:48:47.279631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.024 [2024-07-15 22:48:47.279647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.024 [2024-07-15 22:48:47.279660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:04.024 [2024-07-15 22:48:47.279690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.024 qpair failed and we were unable to recover it. 
00:25:04.024 [2024-07-15 22:48:47.289539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.024 [2024-07-15 22:48:47.289719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.024 [2024-07-15 22:48:47.289745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.024 [2024-07-15 22:48:47.289759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.024 [2024-07-15 22:48:47.289772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:04.024 [2024-07-15 22:48:47.289802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.024 qpair failed and we were unable to recover it. 
00:25:04.024 [2024-07-15 22:48:47.299504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.024 [2024-07-15 22:48:47.299650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.024 [2024-07-15 22:48:47.299676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.024 [2024-07-15 22:48:47.299690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.024 [2024-07-15 22:48:47.299704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:04.024 [2024-07-15 22:48:47.299733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.024 qpair failed and we were unable to recover it. 
00:25:04.024 [2024-07-15 22:48:47.309560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.024 [2024-07-15 22:48:47.309706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.024 [2024-07-15 22:48:47.309732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.024 [2024-07-15 22:48:47.309746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.024 [2024-07-15 22:48:47.309759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:04.024 [2024-07-15 22:48:47.309790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.024 qpair failed and we were unable to recover it. 
00:25:04.024 [2024-07-15 22:48:47.319559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.024 [2024-07-15 22:48:47.319707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.024 [2024-07-15 22:48:47.319732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.024 [2024-07-15 22:48:47.319747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.024 [2024-07-15 22:48:47.319760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:04.024 [2024-07-15 22:48:47.319789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.024 qpair failed and we were unable to recover it. 
00:25:04.024 [2024-07-15 22:48:47.329625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.024 [2024-07-15 22:48:47.329775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.024 [2024-07-15 22:48:47.329801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.024 [2024-07-15 22:48:47.329821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.024 [2024-07-15 22:48:47.329835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:04.024 [2024-07-15 22:48:47.329866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.024 qpair failed and we were unable to recover it. 
00:25:04.024 [2024-07-15 22:48:47.339615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.024 [2024-07-15 22:48:47.339800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.024 [2024-07-15 22:48:47.339825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.024 [2024-07-15 22:48:47.339840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.024 [2024-07-15 22:48:47.339853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:04.024 [2024-07-15 22:48:47.339889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.024 qpair failed and we were unable to recover it. 
00:25:04.024 [2024-07-15 22:48:47.349637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.024 [2024-07-15 22:48:47.349782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.024 [2024-07-15 22:48:47.349807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.024 [2024-07-15 22:48:47.349822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.024 [2024-07-15 22:48:47.349835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:04.024 [2024-07-15 22:48:47.349864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.024 qpair failed and we were unable to recover it. 
00:25:04.024 [2024-07-15 22:48:47.359671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.024 [2024-07-15 22:48:47.359840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.024 [2024-07-15 22:48:47.359865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.025 [2024-07-15 22:48:47.359888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.025 [2024-07-15 22:48:47.359903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:04.025 [2024-07-15 22:48:47.359933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.025 qpair failed and we were unable to recover it. 
00:25:04.025 [2024-07-15 22:48:47.369726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.025 [2024-07-15 22:48:47.369906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.025 [2024-07-15 22:48:47.369932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.025 [2024-07-15 22:48:47.369947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.025 [2024-07-15 22:48:47.369961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:04.025 [2024-07-15 22:48:47.369991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.025 qpair failed and we were unable to recover it. 
00:25:04.025 [2024-07-15 22:48:47.379748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.025 [2024-07-15 22:48:47.379954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.025 [2024-07-15 22:48:47.379982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.025 [2024-07-15 22:48:47.380000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.025 [2024-07-15 22:48:47.380014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:04.025 [2024-07-15 22:48:47.380044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.025 qpair failed and we were unable to recover it. 
00:25:04.025 [2024-07-15 22:48:47.389819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.025 [2024-07-15 22:48:47.390012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.025 [2024-07-15 22:48:47.390038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.025 [2024-07-15 22:48:47.390053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.025 [2024-07-15 22:48:47.390066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:04.025 [2024-07-15 22:48:47.390097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.025 qpair failed and we were unable to recover it. 
00:25:04.025 [2024-07-15 22:48:47.399814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.025 [2024-07-15 22:48:47.399970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.025 [2024-07-15 22:48:47.399997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.025 [2024-07-15 22:48:47.400011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.025 [2024-07-15 22:48:47.400024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:04.025 [2024-07-15 22:48:47.400055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.025 qpair failed and we were unable to recover it. 
00:25:04.025 [2024-07-15 22:48:47.409810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.025 [2024-07-15 22:48:47.409974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.025 [2024-07-15 22:48:47.410000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.025 [2024-07-15 22:48:47.410015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.025 [2024-07-15 22:48:47.410028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:04.025 [2024-07-15 22:48:47.410059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.025 qpair failed and we were unable to recover it. 
00:25:04.025 [2024-07-15 22:48:47.419896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.025 [2024-07-15 22:48:47.420067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.025 [2024-07-15 22:48:47.420101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.025 [2024-07-15 22:48:47.420119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.025 [2024-07-15 22:48:47.420132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:04.025 [2024-07-15 22:48:47.420162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.025 qpair failed and we were unable to recover it. 
00:25:04.025 [2024-07-15 22:48:47.429913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.025 [2024-07-15 22:48:47.430093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.025 [2024-07-15 22:48:47.430119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.025 [2024-07-15 22:48:47.430134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.025 [2024-07-15 22:48:47.430147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:04.025 [2024-07-15 22:48:47.430177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.025 qpair failed and we were unable to recover it. 
00:25:04.025 [2024-07-15 22:48:47.439922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.025 [2024-07-15 22:48:47.440080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.025 [2024-07-15 22:48:47.440106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.025 [2024-07-15 22:48:47.440121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.025 [2024-07-15 22:48:47.440134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:04.025 [2024-07-15 22:48:47.440163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.025 qpair failed and we were unable to recover it. 
00:25:04.025 [2024-07-15 22:48:47.449983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.025 [2024-07-15 22:48:47.450169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.025 [2024-07-15 22:48:47.450195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.025 [2024-07-15 22:48:47.450209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.025 [2024-07-15 22:48:47.450223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:04.025 [2024-07-15 22:48:47.450253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.025 qpair failed and we were unable to recover it. 
00:25:04.025 [2024-07-15 22:48:47.459946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.025 [2024-07-15 22:48:47.460094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.025 [2024-07-15 22:48:47.460119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.025 [2024-07-15 22:48:47.460134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.025 [2024-07-15 22:48:47.460147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:04.025 [2024-07-15 22:48:47.460183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.025 qpair failed and we were unable to recover it. 
00:25:04.025 [2024-07-15 22:48:47.470037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.026 [2024-07-15 22:48:47.470182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.026 [2024-07-15 22:48:47.470207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.026 [2024-07-15 22:48:47.470222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.026 [2024-07-15 22:48:47.470235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:04.026 [2024-07-15 22:48:47.470264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.026 qpair failed and we were unable to recover it. 
00:25:04.026 [2024-07-15 22:48:47.480026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.026 [2024-07-15 22:48:47.480180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.026 [2024-07-15 22:48:47.480206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.026 [2024-07-15 22:48:47.480220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.026 [2024-07-15 22:48:47.480233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:04.026 [2024-07-15 22:48:47.480263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.026 qpair failed and we were unable to recover it. 
00:25:04.026 [2024-07-15 22:48:47.490071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.026 [2024-07-15 22:48:47.490218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.026 [2024-07-15 22:48:47.490242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.026 [2024-07-15 22:48:47.490256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.026 [2024-07-15 22:48:47.490268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:04.026 [2024-07-15 22:48:47.490296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.026 qpair failed and we were unable to recover it. 
00:25:04.026 [2024-07-15 22:48:47.500089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.026 [2024-07-15 22:48:47.500234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.026 [2024-07-15 22:48:47.500260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.026 [2024-07-15 22:48:47.500281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.026 [2024-07-15 22:48:47.500295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:04.026 [2024-07-15 22:48:47.500326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.026 qpair failed and we were unable to recover it. 
00:25:04.026 [2024-07-15 22:48:47.510130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.026 [2024-07-15 22:48:47.510275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.026 [2024-07-15 22:48:47.510306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.026 [2024-07-15 22:48:47.510321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.026 [2024-07-15 22:48:47.510334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:04.026 [2024-07-15 22:48:47.510364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.026 qpair failed and we were unable to recover it. 
00:25:04.026 [2024-07-15 22:48:47.520163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.026 [2024-07-15 22:48:47.520314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.026 [2024-07-15 22:48:47.520339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.026 [2024-07-15 22:48:47.520354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.026 [2024-07-15 22:48:47.520367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:04.026 [2024-07-15 22:48:47.520396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.026 qpair failed and we were unable to recover it. 
00:25:04.286 [2024-07-15 22:48:47.530162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.286 [2024-07-15 22:48:47.530314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.286 [2024-07-15 22:48:47.530340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.286 [2024-07-15 22:48:47.530354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.286 [2024-07-15 22:48:47.530368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:04.286 [2024-07-15 22:48:47.530397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.286 qpair failed and we were unable to recover it. 
00:25:04.286 [2024-07-15 22:48:47.540201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.286 [2024-07-15 22:48:47.540344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.286 [2024-07-15 22:48:47.540370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.287 [2024-07-15 22:48:47.540385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.287 [2024-07-15 22:48:47.540398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:04.287 [2024-07-15 22:48:47.540428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.287 qpair failed and we were unable to recover it. 
00:25:04.287 [2024-07-15 22:48:47.550263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.287 [2024-07-15 22:48:47.550406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.287 [2024-07-15 22:48:47.550431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.287 [2024-07-15 22:48:47.550446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.287 [2024-07-15 22:48:47.550465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:04.287 [2024-07-15 22:48:47.550509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.287 qpair failed and we were unable to recover it. 
00:25:04.287 [2024-07-15 22:48:47.560258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.287 [2024-07-15 22:48:47.560422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.287 [2024-07-15 22:48:47.560448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.287 [2024-07-15 22:48:47.560463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.287 [2024-07-15 22:48:47.560476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd660000b90 00:25:04.287 [2024-07-15 22:48:47.560505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.287 qpair failed and we were unable to recover it. 00:25:04.287 Controller properly reset. 00:25:05.224 Initializing NVMe Controllers 00:25:05.224 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:05.224 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:05.224 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:25:05.224 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:25:05.224 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:25:05.224 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:25:05.224 Initialization complete. Launching workers. 
00:25:05.224 Starting thread on core 1 00:25:05.224 Starting thread on core 2 00:25:05.224 Starting thread on core 3 00:25:05.224 Starting thread on core 0 00:25:05.224 22:48:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:25:05.224 00:25:05.224 real 0m10.626s 00:25:05.224 user 0m18.818s 00:25:05.224 sys 0m6.155s 00:25:05.224 22:48:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:25:05.224 22:48:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:05.224 ************************************ 00:25:05.224 END TEST nvmf_target_disconnect_tc2 00:25:05.224 ************************************ 00:25:05.224 22:48:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1136 -- # return 0 00:25:05.224 22:48:48 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:25:05.224 22:48:48 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:25:05.224 22:48:48 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:25:05.224 22:48:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:05.224 22:48:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:25:05.224 22:48:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:05.224 22:48:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:25:05.224 22:48:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:05.224 22:48:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:05.224 rmmod nvme_tcp 00:25:05.224 rmmod nvme_fabrics 00:25:05.224 rmmod nvme_keyring 00:25:05.224 22:48:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:05.224 
22:48:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:25:05.224 22:48:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:25:05.224 22:48:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1364828 ']' 00:25:05.224 22:48:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1364828 00:25:05.224 22:48:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@942 -- # '[' -z 1364828 ']' 00:25:05.224 22:48:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # kill -0 1364828 00:25:05.224 22:48:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@947 -- # uname 00:25:05.224 22:48:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:25:05.224 22:48:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1364828 00:25:05.224 22:48:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # process_name=reactor_4 00:25:05.224 22:48:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' reactor_4 = sudo ']' 00:25:05.224 22:48:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1364828' 00:25:05.224 killing process with pid 1364828 00:25:05.224 22:48:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@961 -- # kill 1364828 00:25:05.224 22:48:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # wait 1364828 00:25:05.482 22:48:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:05.482 22:48:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:05.482 22:48:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:05.482 22:48:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:05.482 22:48:48 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:05.482 22:48:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.482 22:48:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:05.482 22:48:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.013 22:48:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:08.013 00:25:08.013 real 0m15.355s 00:25:08.013 user 0m43.992s 00:25:08.013 sys 0m8.199s 00:25:08.013 22:48:51 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1118 -- # xtrace_disable 00:25:08.013 22:48:51 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:08.013 ************************************ 00:25:08.013 END TEST nvmf_target_disconnect 00:25:08.013 ************************************ 00:25:08.013 22:48:51 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:25:08.013 22:48:51 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:25:08.013 22:48:51 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:08.013 22:48:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:08.013 22:48:51 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:25:08.013 00:25:08.013 real 19m42.217s 00:25:08.013 user 46m53.230s 00:25:08.013 sys 4m52.634s 00:25:08.013 22:48:51 nvmf_tcp -- common/autotest_common.sh@1118 -- # xtrace_disable 00:25:08.013 22:48:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:08.013 ************************************ 00:25:08.013 END TEST nvmf_tcp 00:25:08.013 ************************************ 00:25:08.013 22:48:51 -- common/autotest_common.sh@1136 -- # return 0 00:25:08.013 22:48:51 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:25:08.013 22:48:51 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:08.013 22:48:51 -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:25:08.013 22:48:51 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:25:08.013 22:48:51 -- common/autotest_common.sh@10 -- # set +x 00:25:08.013 ************************************ 00:25:08.013 START TEST spdkcli_nvmf_tcp 00:25:08.013 ************************************ 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:08.013 * Looking for test storage... 00:25:08.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:08.013 22:48:51 spdkcli_nvmf_tcp 
-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1366030 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1366030 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@823 -- # '[' -z 1366030 ']' 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@828 -- # local max_retries=100 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # xtrace_disable 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:08.013 [2024-07-15 22:48:51.199830] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:25:08.013 [2024-07-15 22:48:51.199935] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1366030 ] 00:25:08.013 [2024-07-15 22:48:51.256307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:08.013 [2024-07-15 22:48:51.362603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.013 [2024-07-15 22:48:51.362610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # return 0 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:08.013 22:48:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' 
'\''Malloc1'\'' True 00:25:08.013 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:08.013 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:08.013 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:08.013 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:08.013 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:08.013 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:08.013 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:08.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:08.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:08.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:08.013 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:08.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:08.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:08.013 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:08.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:08.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:08.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' 
'\''127.0.0.1:4261'\'' True 00:25:08.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:08.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:08.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:08.014 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:08.014 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:08.014 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:08.014 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:08.014 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:08.014 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:08.014 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:08.014 ' 00:25:10.543 [2024-07-15 22:48:54.040260] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:11.921 [2024-07-15 22:48:55.276570] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:14.458 [2024-07-15 22:48:57.559933] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:16.368 [2024-07-15 22:48:59.526150] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:17.781 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:17.781 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:17.781 Executing 
command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:17.781 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:17.781 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:17.781 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:17.781 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:17.781 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:17.781 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:17.781 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:17.781 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:17.781 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:17.781 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:17.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:17.782 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:17.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:17.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:17.782 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:17.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:17.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:17.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:17.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:17.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:17.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:17.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:17.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:17.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:17.782 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:17.782 22:49:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:17.782 22:49:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:17.782 22:49:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:17.782 22:49:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:17.782 22:49:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:17.782 22:49:01 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:25:17.782 22:49:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:25:17.782 22:49:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:25:18.351 22:49:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:18.351 22:49:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:18.351 22:49:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:18.351 22:49:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:18.351 22:49:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:18.351 22:49:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:18.351 22:49:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:18.351 22:49:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:18.351 22:49:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:18.351 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:18.351 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:18.351 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:18.351 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:18.351 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:18.351 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:18.351 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:18.351 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:18.351 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:18.351 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:18.351 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:18.351 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:18.351 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:18.351 ' 00:25:23.661 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:23.661 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:23.661 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:23.661 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:23.661 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:23.661 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:23.661 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:23.661 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:23.661 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:23.661 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:23.661 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 
00:25:23.661 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:23.661 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:23.661 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:23.661 22:49:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:23.661 22:49:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:23.661 22:49:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:23.661 22:49:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1366030 00:25:23.661 22:49:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@942 -- # '[' -z 1366030 ']' 00:25:23.661 22:49:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # kill -0 1366030 00:25:23.661 22:49:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@947 -- # uname 00:25:23.661 22:49:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:25:23.661 22:49:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1366030 00:25:23.661 22:49:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:25:23.661 22:49:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:25:23.661 22:49:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1366030' 00:25:23.661 killing process with pid 1366030 00:25:23.661 22:49:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@961 -- # kill 1366030 00:25:23.661 22:49:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # wait 1366030 00:25:23.920 22:49:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:23.920 22:49:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:23.920 22:49:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1366030 ']' 00:25:23.920 22:49:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1366030 00:25:23.920 22:49:07 
spdkcli_nvmf_tcp -- common/autotest_common.sh@942 -- # '[' -z 1366030 ']' 00:25:23.920 22:49:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # kill -0 1366030 00:25:23.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 946: kill: (1366030) - No such process 00:25:23.920 22:49:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # echo 'Process with pid 1366030 is not found' 00:25:23.920 Process with pid 1366030 is not found 00:25:23.920 22:49:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:23.920 22:49:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:23.920 22:49:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:23.920 00:25:23.920 real 0m16.127s 00:25:23.920 user 0m34.101s 00:25:23.921 sys 0m0.797s 00:25:23.921 22:49:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@1118 -- # xtrace_disable 00:25:23.921 22:49:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:23.921 ************************************ 00:25:23.921 END TEST spdkcli_nvmf_tcp 00:25:23.921 ************************************ 00:25:23.921 22:49:07 -- common/autotest_common.sh@1136 -- # return 0 00:25:23.921 22:49:07 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:23.921 22:49:07 -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:25:23.921 22:49:07 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:25:23.921 22:49:07 -- common/autotest_common.sh@10 -- # set +x 00:25:23.921 ************************************ 00:25:23.921 START TEST nvmf_identify_passthru 00:25:23.921 ************************************ 00:25:23.921 22:49:07 nvmf_identify_passthru -- 
common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:23.921 * Looking for test storage... 00:25:23.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:23.921 22:49:07 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:23.921 22:49:07 
nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:23.921 22:49:07 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:23.921 22:49:07 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:23.921 22:49:07 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:23.921 22:49:07 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.921 22:49:07 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.921 22:49:07 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.921 22:49:07 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:23.921 22:49:07 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:23.921 22:49:07 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:23.921 22:49:07 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:23.921 22:49:07 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:23.921 22:49:07 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:23.921 22:49:07 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.921 22:49:07 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.921 22:49:07 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.921 22:49:07 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 
00:25:23.921 22:49:07 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.921 22:49:07 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.921 22:49:07 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:23.921 22:49:07 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:23.921 22:49:07 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:25:23.921 22:49:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@291 -- # 
pci_devs=() 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:25.827 22:49:09 nvmf_identify_passthru -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:25.827 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:25.827 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:25.827 22:49:09 
nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:25.827 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:25.827 22:49:09 nvmf_identify_passthru -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:25.827 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:25.827 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:25.828 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:25.828 22:49:09 
nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:25.828 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:25.828 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:25.828 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:25.828 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:26.086 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:26.086 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:26.086 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:26.086 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:26.086 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:26.086 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:26.086 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:26.086 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:26.086 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:25:26.086 00:25:26.086 --- 10.0.0.2 ping statistics --- 00:25:26.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.086 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:25:26.086 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:26.086 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:26.086 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:25:26.086 00:25:26.086 --- 10.0.0.1 ping statistics --- 00:25:26.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.087 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:25:26.087 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:26.087 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:25:26.087 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:26.087 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:26.087 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:26.087 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:26.087 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:26.087 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:26.087 22:49:09 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:26.087 22:49:09 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:26.087 22:49:09 nvmf_identify_passthru -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:26.087 22:49:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:26.087 22:49:09 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:26.087 22:49:09 nvmf_identify_passthru -- common/autotest_common.sh@1518 -- # bdfs=() 00:25:26.087 22:49:09 nvmf_identify_passthru -- common/autotest_common.sh@1518 -- # local bdfs 00:25:26.087 22:49:09 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:25:26.087 22:49:09 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:25:26.087 22:49:09 nvmf_identify_passthru -- 
common/autotest_common.sh@1507 -- # bdfs=() 00:25:26.087 22:49:09 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:25:26.087 22:49:09 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:26.087 22:49:09 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:26.087 22:49:09 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # jq -r '.config[].params.traddr' 00:25:26.087 22:49:09 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # (( 1 == 0 )) 00:25:26.087 22:49:09 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # printf '%s\n' 0000:88:00.0 00:25:26.087 22:49:09 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # echo 0000:88:00.0 00:25:26.087 22:49:09 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:25:26.087 22:49:09 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:25:26.087 22:49:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:25:26.087 22:49:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:26.087 22:49:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:30.279 22:49:13 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:25:30.279 22:49:13 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:25:30.279 22:49:13 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:30.279 22:49:13 nvmf_identify_passthru -- 
target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:34.473 22:49:17 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:25:34.473 22:49:17 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:34.473 22:49:17 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:34.473 22:49:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:34.736 22:49:17 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:25:34.736 22:49:17 nvmf_identify_passthru -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:34.736 22:49:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:34.736 22:49:17 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1370652 00:25:34.736 22:49:17 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:34.736 22:49:17 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:34.736 22:49:17 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1370652 00:25:34.736 22:49:17 nvmf_identify_passthru -- common/autotest_common.sh@823 -- # '[' -z 1370652 ']' 00:25:34.736 22:49:17 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:34.736 22:49:17 nvmf_identify_passthru -- common/autotest_common.sh@828 -- # local max_retries=100 00:25:34.736 22:49:17 nvmf_identify_passthru -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:34.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:34.736 22:49:17 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # xtrace_disable 00:25:34.736 22:49:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:34.736 [2024-07-15 22:49:18.033766] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:25:34.736 [2024-07-15 22:49:18.033854] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:34.736 [2024-07-15 22:49:18.098526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:34.736 [2024-07-15 22:49:18.208890] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:34.736 [2024-07-15 22:49:18.208966] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:34.736 [2024-07-15 22:49:18.208980] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:34.736 [2024-07-15 22:49:18.208999] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:34.736 [2024-07-15 22:49:18.209009] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:34.736 [2024-07-15 22:49:18.209070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:34.736 [2024-07-15 22:49:18.209130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:34.736 [2024-07-15 22:49:18.209195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:34.736 [2024-07-15 22:49:18.209198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.997 22:49:18 nvmf_identify_passthru -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:25:34.997 22:49:18 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # return 0 00:25:34.997 22:49:18 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:34.997 22:49:18 nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:34.997 22:49:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:34.997 INFO: Log level set to 20 00:25:34.997 INFO: Requests: 00:25:34.997 { 00:25:34.997 "jsonrpc": "2.0", 00:25:34.997 "method": "nvmf_set_config", 00:25:34.997 "id": 1, 00:25:34.997 "params": { 00:25:34.997 "admin_cmd_passthru": { 00:25:34.997 "identify_ctrlr": true 00:25:34.997 } 00:25:34.997 } 00:25:34.997 } 00:25:34.997 00:25:34.997 INFO: response: 00:25:34.997 { 00:25:34.997 "jsonrpc": "2.0", 00:25:34.997 "id": 1, 00:25:34.997 "result": true 00:25:34.997 } 00:25:34.997 00:25:34.997 22:49:18 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:34.997 22:49:18 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:34.997 22:49:18 nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:34.997 22:49:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:34.997 INFO: Setting log level to 20 00:25:34.997 INFO: Setting log level to 20 00:25:34.997 INFO: Log level set to 20 00:25:34.997 INFO: Log level set to 20 00:25:34.997 
INFO: Requests: 00:25:34.997 { 00:25:34.997 "jsonrpc": "2.0", 00:25:34.997 "method": "framework_start_init", 00:25:34.997 "id": 1 00:25:34.997 } 00:25:34.997 00:25:34.997 INFO: Requests: 00:25:34.997 { 00:25:34.997 "jsonrpc": "2.0", 00:25:34.997 "method": "framework_start_init", 00:25:34.997 "id": 1 00:25:34.997 } 00:25:34.997 00:25:34.997 [2024-07-15 22:49:18.357254] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:34.997 INFO: response: 00:25:34.997 { 00:25:34.997 "jsonrpc": "2.0", 00:25:34.997 "id": 1, 00:25:34.997 "result": true 00:25:34.997 } 00:25:34.997 00:25:34.997 INFO: response: 00:25:34.997 { 00:25:34.997 "jsonrpc": "2.0", 00:25:34.997 "id": 1, 00:25:34.997 "result": true 00:25:34.997 } 00:25:34.997 00:25:34.997 22:49:18 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:34.997 22:49:18 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:34.997 22:49:18 nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:34.997 22:49:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:34.997 INFO: Setting log level to 40 00:25:34.997 INFO: Setting log level to 40 00:25:34.997 INFO: Setting log level to 40 00:25:34.997 [2024-07-15 22:49:18.367398] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:34.997 22:49:18 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:34.997 22:49:18 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:34.997 22:49:18 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:34.997 22:49:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:34.997 22:49:18 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:25:34.997 22:49:18 
nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:34.997 22:49:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:38.287 Nvme0n1 00:25:38.287 22:49:21 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:38.287 22:49:21 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:38.287 22:49:21 nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:38.287 22:49:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:38.287 22:49:21 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:38.287 22:49:21 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:38.287 22:49:21 nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:38.287 22:49:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:38.287 22:49:21 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:38.287 22:49:21 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:38.287 22:49:21 nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:38.287 22:49:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:38.287 [2024-07-15 22:49:21.261466] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:38.287 22:49:21 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:38.287 22:49:21 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:38.287 22:49:21 nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:38.287 22:49:21 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:38.287 [ 00:25:38.287 { 00:25:38.287 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:38.287 "subtype": "Discovery", 00:25:38.287 "listen_addresses": [], 00:25:38.287 "allow_any_host": true, 00:25:38.287 "hosts": [] 00:25:38.287 }, 00:25:38.287 { 00:25:38.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:38.287 "subtype": "NVMe", 00:25:38.287 "listen_addresses": [ 00:25:38.287 { 00:25:38.287 "trtype": "TCP", 00:25:38.287 "adrfam": "IPv4", 00:25:38.287 "traddr": "10.0.0.2", 00:25:38.287 "trsvcid": "4420" 00:25:38.287 } 00:25:38.287 ], 00:25:38.287 "allow_any_host": true, 00:25:38.287 "hosts": [], 00:25:38.287 "serial_number": "SPDK00000000000001", 00:25:38.287 "model_number": "SPDK bdev Controller", 00:25:38.287 "max_namespaces": 1, 00:25:38.287 "min_cntlid": 1, 00:25:38.287 "max_cntlid": 65519, 00:25:38.287 "namespaces": [ 00:25:38.287 { 00:25:38.287 "nsid": 1, 00:25:38.287 "bdev_name": "Nvme0n1", 00:25:38.287 "name": "Nvme0n1", 00:25:38.287 "nguid": "C087629D46D843ADB646AAFB6406E663", 00:25:38.287 "uuid": "c087629d-46d8-43ad-b646-aafb6406e663" 00:25:38.287 } 00:25:38.287 ] 00:25:38.287 } 00:25:38.287 ] 00:25:38.287 22:49:21 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:38.287 22:49:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:38.287 22:49:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:38.287 22:49:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:38.287 22:49:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:25:38.287 22:49:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:38.287 22:49:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:38.287 22:49:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:38.287 22:49:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:25:38.287 22:49:21 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:25:38.287 22:49:21 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:25:38.287 22:49:21 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:38.287 22:49:21 nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:38.287 22:49:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:38.287 22:49:21 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:38.287 22:49:21 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:38.287 22:49:21 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:38.287 22:49:21 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:38.287 22:49:21 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:25:38.287 22:49:21 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:38.287 22:49:21 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:25:38.287 22:49:21 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:38.287 22:49:21 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:38.287 rmmod nvme_tcp 00:25:38.287 rmmod nvme_fabrics 00:25:38.287 rmmod nvme_keyring 00:25:38.287 22:49:21 
nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:38.287 22:49:21 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:25:38.287 22:49:21 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:25:38.287 22:49:21 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1370652 ']' 00:25:38.287 22:49:21 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1370652 00:25:38.287 22:49:21 nvmf_identify_passthru -- common/autotest_common.sh@942 -- # '[' -z 1370652 ']' 00:25:38.287 22:49:21 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # kill -0 1370652 00:25:38.287 22:49:21 nvmf_identify_passthru -- common/autotest_common.sh@947 -- # uname 00:25:38.287 22:49:21 nvmf_identify_passthru -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:25:38.287 22:49:21 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1370652 00:25:38.287 22:49:21 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:25:38.287 22:49:21 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:25:38.287 22:49:21 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1370652' 00:25:38.287 killing process with pid 1370652 00:25:38.287 22:49:21 nvmf_identify_passthru -- common/autotest_common.sh@961 -- # kill 1370652 00:25:38.287 22:49:21 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # wait 1370652 00:25:40.184 22:49:23 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:40.184 22:49:23 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:40.184 22:49:23 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:40.184 22:49:23 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:40.184 22:49:23 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:40.184 
22:49:23 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.184 22:49:23 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:40.184 22:49:23 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.107 22:49:25 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:42.107 00:25:42.107 real 0m18.114s 00:25:42.107 user 0m26.887s 00:25:42.107 sys 0m2.285s 00:25:42.107 22:49:25 nvmf_identify_passthru -- common/autotest_common.sh@1118 -- # xtrace_disable 00:25:42.107 22:49:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:42.107 ************************************ 00:25:42.107 END TEST nvmf_identify_passthru 00:25:42.107 ************************************ 00:25:42.107 22:49:25 -- common/autotest_common.sh@1136 -- # return 0 00:25:42.107 22:49:25 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:42.107 22:49:25 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:25:42.107 22:49:25 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:25:42.107 22:49:25 -- common/autotest_common.sh@10 -- # set +x 00:25:42.107 ************************************ 00:25:42.107 START TEST nvmf_dif 00:25:42.107 ************************************ 00:25:42.107 22:49:25 nvmf_dif -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:42.107 * Looking for test storage... 
00:25:42.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:42.107 22:49:25 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:42.107 22:49:25 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:25:42.107 22:49:25 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:42.107 22:49:25 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:42.107 22:49:25 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:42.107 22:49:25 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:42.107 22:49:25 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:42.108 22:49:25 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:42.108 22:49:25 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:42.108 22:49:25 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:42.108 22:49:25 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:42.108 22:49:25 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:42.108 22:49:25 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:42.108 22:49:25 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:42.108 22:49:25 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:42.108 22:49:25 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:42.108 22:49:25 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:42.108 22:49:25 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:42.108 22:49:25 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:42.108 22:49:25 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:42.108 22:49:25 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:42.108 22:49:25 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:42.108 22:49:25 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.108 22:49:25 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.108 22:49:25 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.108 22:49:25 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:25:42.108 22:49:25 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.108 22:49:25 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:25:42.108 22:49:25 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:42.108 22:49:25 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:42.108 22:49:25 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:42.108 22:49:25 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:42.108 22:49:25 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:42.108 22:49:25 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:42.108 22:49:25 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:42.108 22:49:25 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:42.108 22:49:25 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:25:42.108 22:49:25 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:42.108 22:49:25 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:42.108 22:49:25 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:25:42.108 22:49:25 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:25:42.108 22:49:25 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:42.108 22:49:25 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:42.108 22:49:25 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:42.108 22:49:25 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:42.108 22:49:25 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:42.108 22:49:25 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.108 22:49:25 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:42.108 22:49:25 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.108 22:49:25 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:42.108 22:49:25 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:42.108 22:49:25 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:25:42.108 22:49:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:44.035 22:49:27 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:44.035 22:49:27 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:25:44.035 22:49:27 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:44.035 22:49:27 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:44.036 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 
(0x8086 - 0x159b)' 00:25:44.036 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:44.036 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:44.036 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:44.036 22:49:27 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:44.036 22:49:27 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:44.295 22:49:27 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:44.295 22:49:27 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:44.295 22:49:27 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:44.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:44.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:25:44.295 00:25:44.295 --- 10.0.0.2 ping statistics --- 00:25:44.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:44.295 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:25:44.295 22:49:27 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:44.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:44.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:25:44.295 00:25:44.295 --- 10.0.0.1 ping statistics --- 00:25:44.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:44.295 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:25:44.295 22:49:27 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:44.295 22:49:27 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:25:44.295 22:49:27 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:25:44.295 22:49:27 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:45.232 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:45.232 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:45.232 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:45.232 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:45.232 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:45.232 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:45.232 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:45.232 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:45.232 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:45.232 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:45.232 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:45.232 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:45.232 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:45.232 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:45.232 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:45.232 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:45.232 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:45.491 22:49:28 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:45.491 22:49:28 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:45.491 22:49:28 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:45.491 22:49:28 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:45.491 22:49:28 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:45.491 22:49:28 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:45.491 22:49:28 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:45.491 22:49:28 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:25:45.491 22:49:28 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:45.491 22:49:28 nvmf_dif -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:45.491 22:49:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:45.491 22:49:28 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1373795 00:25:45.491 22:49:28 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:45.491 22:49:28 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1373795 00:25:45.491 22:49:28 nvmf_dif -- common/autotest_common.sh@823 -- # '[' -z 1373795 ']' 00:25:45.491 22:49:28 nvmf_dif -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:45.491 22:49:28 nvmf_dif -- common/autotest_common.sh@828 -- # local max_retries=100 00:25:45.491 22:49:28 nvmf_dif -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:45.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:45.491 22:49:28 nvmf_dif -- common/autotest_common.sh@832 -- # xtrace_disable 00:25:45.491 22:49:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:45.491 [2024-07-15 22:49:28.967020] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:25:45.491 [2024-07-15 22:49:28.967096] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:45.748 [2024-07-15 22:49:29.031076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.748 [2024-07-15 22:49:29.138447] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:45.748 [2024-07-15 22:49:29.138502] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:45.748 [2024-07-15 22:49:29.138525] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:45.748 [2024-07-15 22:49:29.138536] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:45.748 [2024-07-15 22:49:29.138545] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:45.748 [2024-07-15 22:49:29.138578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.748 22:49:29 nvmf_dif -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:25:45.748 22:49:29 nvmf_dif -- common/autotest_common.sh@856 -- # return 0 00:25:45.748 22:49:29 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:45.748 22:49:29 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:45.748 22:49:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:46.006 22:49:29 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:46.006 22:49:29 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:25:46.006 22:49:29 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:46.006 22:49:29 nvmf_dif -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:46.006 22:49:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:46.006 [2024-07-15 22:49:29.274212] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:46.006 22:49:29 nvmf_dif -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:46.006 22:49:29 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:46.006 22:49:29 nvmf_dif -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:25:46.006 22:49:29 nvmf_dif -- common/autotest_common.sh@1099 -- # xtrace_disable 00:25:46.006 22:49:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:46.006 ************************************ 00:25:46.006 START TEST fio_dif_1_default 00:25:46.006 ************************************ 00:25:46.006 22:49:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1117 -- # fio_dif_1 00:25:46.006 22:49:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:25:46.006 22:49:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:25:46.006 22:49:29 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:25:46.006 22:49:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:25:46.006 22:49:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:25:46.006 22:49:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:46.006 22:49:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:46.006 22:49:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:46.006 bdev_null0 00:25:46.006 22:49:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:46.006 22:49:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:46.006 22:49:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:46.006 22:49:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:46.007 [2024-07-15 22:49:29.330484] tcp.c: 
981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:46.007 { 00:25:46.007 "params": { 00:25:46.007 "name": "Nvme$subsystem", 00:25:46.007 "trtype": "$TEST_TRANSPORT", 00:25:46.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.007 "adrfam": "ipv4", 00:25:46.007 "trsvcid": "$NVMF_PORT", 00:25:46.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.007 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.007 "hdgst": ${hdgst:-false}, 00:25:46.007 "ddgst": ${ddgst:-false} 00:25:46.007 }, 00:25:46.007 "method": "bdev_nvme_attach_controller" 00:25:46.007 } 00:25:46.007 EOF 00:25:46.007 )") 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:25:46.007 22:49:29 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local sanitizers 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1334 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # shift 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local asan_lib= 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # grep libasan 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:46.007 "params": { 00:25:46.007 "name": "Nvme0", 00:25:46.007 "trtype": "tcp", 00:25:46.007 "traddr": "10.0.0.2", 00:25:46.007 "adrfam": "ipv4", 00:25:46.007 "trsvcid": "4420", 00:25:46.007 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:46.007 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:46.007 "hdgst": false, 00:25:46.007 "ddgst": false 00:25:46.007 }, 00:25:46.007 "method": "bdev_nvme_attach_controller" 00:25:46.007 }' 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # asan_lib= 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # asan_lib= 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:46.007 22:49:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:46.267 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:46.267 fio-3.35 
00:25:46.267 Starting 1 thread 00:25:58.464 00:25:58.464 filename0: (groupid=0, jobs=1): err= 0: pid=1374020: Mon Jul 15 22:49:40 2024 00:25:58.464 read: IOPS=96, BW=384KiB/s (393kB/s)(3856KiB/10036msec) 00:25:58.464 slat (nsec): min=5385, max=62650, avg=11108.66, stdev=6508.15 00:25:58.464 clat (usec): min=40909, max=44770, avg=41609.40, stdev=520.62 00:25:58.464 lat (usec): min=40916, max=44811, avg=41620.50, stdev=520.86 00:25:58.464 clat percentiles (usec): 00:25:58.464 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:25:58.464 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:25:58.464 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:25:58.464 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:25:58.464 | 99.99th=[44827] 00:25:58.464 bw ( KiB/s): min= 352, max= 416, per=99.94%, avg=384.00, stdev=14.68, samples=20 00:25:58.464 iops : min= 88, max= 104, avg=96.00, stdev= 3.67, samples=20 00:25:58.464 lat (msec) : 50=100.00% 00:25:58.464 cpu : usr=88.90%, sys=10.83%, ctx=19, majf=0, minf=287 00:25:58.464 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:58.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.464 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.464 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:58.464 00:25:58.464 Run status group 0 (all jobs): 00:25:58.464 READ: bw=384KiB/s (393kB/s), 384KiB/s-384KiB/s (393kB/s-393kB/s), io=3856KiB (3949kB), run=10036-10036msec 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_default 
-- target/dif.sh@46 -- # destroy_subsystem 0 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:58.464 00:25:58.464 real 0m11.204s 00:25:58.464 user 0m10.084s 00:25:58.464 sys 0m1.334s 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1118 -- # xtrace_disable 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:58.464 ************************************ 00:25:58.464 END TEST fio_dif_1_default 00:25:58.464 ************************************ 00:25:58.464 22:49:40 nvmf_dif -- common/autotest_common.sh@1136 -- # return 0 00:25:58.464 22:49:40 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:58.464 22:49:40 nvmf_dif -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:25:58.464 22:49:40 nvmf_dif -- common/autotest_common.sh@1099 -- # xtrace_disable 00:25:58.464 22:49:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:58.464 ************************************ 00:25:58.464 START TEST fio_dif_1_multi_subsystems 00:25:58.464 ************************************ 
00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1117 -- # fio_dif_1_multi_subsystems 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:58.464 bdev_null0 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:58.464 
22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:58.464 [2024-07-15 22:49:40.578884] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:58.464 bdev_null1 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- 
# local subsystem config 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:58.464 { 00:25:58.464 "params": { 00:25:58.464 "name": "Nvme$subsystem", 00:25:58.464 "trtype": "$TEST_TRANSPORT", 00:25:58.464 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:58.464 "adrfam": "ipv4", 00:25:58.464 "trsvcid": "$NVMF_PORT", 00:25:58.464 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:58.464 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:58.464 "hdgst": ${hdgst:-false}, 00:25:58.464 "ddgst": ${ddgst:-false} 00:25:58.464 }, 00:25:58.464 "method": "bdev_nvme_attach_controller" 00:25:58.464 } 00:25:58.464 EOF 00:25:58.464 )") 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local sanitizers 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1334 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # shift 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local asan_lib= 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:25:58.464 
22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:25:58.464 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # grep libasan 00:25:58.465 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:25:58.465 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:25:58.465 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:58.465 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:58.465 { 00:25:58.465 "params": { 00:25:58.465 "name": "Nvme$subsystem", 00:25:58.465 "trtype": "$TEST_TRANSPORT", 00:25:58.465 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:58.465 "adrfam": "ipv4", 00:25:58.465 "trsvcid": "$NVMF_PORT", 00:25:58.465 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:58.465 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:58.465 "hdgst": ${hdgst:-false}, 00:25:58.465 "ddgst": ${ddgst:-false} 00:25:58.465 }, 00:25:58.465 "method": "bdev_nvme_attach_controller" 00:25:58.465 } 00:25:58.465 EOF 00:25:58.465 )") 00:25:58.465 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:25:58.465 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:25:58.465 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:25:58.465 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:25:58.465 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:25:58.465 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:58.465 "params": { 00:25:58.465 "name": "Nvme0", 00:25:58.465 "trtype": "tcp", 00:25:58.465 "traddr": "10.0.0.2", 00:25:58.465 "adrfam": "ipv4", 00:25:58.465 "trsvcid": "4420", 00:25:58.465 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:58.465 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:58.465 "hdgst": false, 00:25:58.465 "ddgst": false 00:25:58.465 }, 00:25:58.465 "method": "bdev_nvme_attach_controller" 00:25:58.465 },{ 00:25:58.465 "params": { 00:25:58.465 "name": "Nvme1", 00:25:58.465 "trtype": "tcp", 00:25:58.465 "traddr": "10.0.0.2", 00:25:58.465 "adrfam": "ipv4", 00:25:58.465 "trsvcid": "4420", 00:25:58.465 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:58.465 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:58.465 "hdgst": false, 00:25:58.465 "ddgst": false 00:25:58.465 }, 00:25:58.465 "method": "bdev_nvme_attach_controller" 00:25:58.465 }' 00:25:58.465 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # asan_lib= 00:25:58.465 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:25:58.465 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:25:58.465 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:58.465 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:25:58.465 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:25:58.465 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # asan_lib= 00:25:58.465 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:25:58.465 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:58.465 22:49:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:58.465 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:58.465 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:58.465 fio-3.35 00:25:58.465 Starting 2 threads 00:26:08.432 00:26:08.432 filename0: (groupid=0, jobs=1): err= 0: pid=1375429: Mon Jul 15 22:49:51 2024 00:26:08.432 read: IOPS=188, BW=753KiB/s (771kB/s)(7552KiB/10031msec) 00:26:08.432 slat (nsec): min=7044, max=61838, avg=9305.53, stdev=3726.47 00:26:08.432 clat (usec): min=838, max=42582, avg=21222.67, stdev=20218.19 00:26:08.432 lat (usec): min=846, max=42626, avg=21231.98, stdev=20217.83 00:26:08.432 clat percentiles (usec): 00:26:08.432 | 1.00th=[ 865], 5.00th=[ 889], 10.00th=[ 906], 20.00th=[ 922], 00:26:08.432 | 30.00th=[ 930], 40.00th=[ 947], 50.00th=[41157], 60.00th=[41157], 00:26:08.432 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:26:08.432 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:26:08.432 | 99.99th=[42730] 00:26:08.432 bw ( KiB/s): min= 672, max= 768, per=66.21%, avg=753.60, stdev=30.22, samples=20 00:26:08.432 iops : min= 168, max= 192, avg=188.40, stdev= 7.56, samples=20 00:26:08.432 lat (usec) : 1000=48.62% 00:26:08.432 lat (msec) : 2=1.17%, 50=50.21% 00:26:08.432 cpu : usr=93.84%, sys=5.87%, ctx=20, majf=0, minf=193 00:26:08.432 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:08.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:26:08.432 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:08.432 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:08.432 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:08.432 filename1: (groupid=0, jobs=1): err= 0: pid=1375430: Mon Jul 15 22:49:51 2024 00:26:08.432 read: IOPS=96, BW=385KiB/s (394kB/s)(3856KiB/10020msec) 00:26:08.432 slat (nsec): min=7048, max=35658, avg=9700.22, stdev=4094.39 00:26:08.432 clat (usec): min=40900, max=42552, avg=41545.11, stdev=496.64 00:26:08.432 lat (usec): min=40919, max=42581, avg=41554.81, stdev=496.99 00:26:08.432 clat percentiles (usec): 00:26:08.432 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:26:08.432 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:26:08.432 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:26:08.432 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:26:08.432 | 99.99th=[42730] 00:26:08.432 bw ( KiB/s): min= 384, max= 384, per=33.76%, avg=384.00, stdev= 0.00, samples=20 00:26:08.432 iops : min= 96, max= 96, avg=96.00, stdev= 0.00, samples=20 00:26:08.432 lat (msec) : 50=100.00% 00:26:08.432 cpu : usr=94.24%, sys=5.47%, ctx=16, majf=0, minf=68 00:26:08.432 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:08.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:08.432 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:08.432 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:08.432 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:08.432 00:26:08.432 Run status group 0 (all jobs): 00:26:08.432 READ: bw=1137KiB/s (1165kB/s), 385KiB/s-753KiB/s (394kB/s-771kB/s), io=11.1MiB (11.7MB), run=10020-10031msec 00:26:08.691 22:49:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:08.691 22:49:51 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:26:08.691 22:49:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:08.691 22:49:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:08.691 22:49:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:26:08.691 22:49:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:08.691 22:49:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:08.691 22:49:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:08.691 22:49:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:08.691 22:49:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:08.691 22:49:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:08.691 22:49:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:08.691 22:49:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:08.691 22:49:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:08.691 22:49:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:08.691 22:49:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:26:08.691 22:49:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:08.691 22:49:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:08.691 22:49:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:08.691 22:49:51 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:08.691 22:49:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:08.691 22:49:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:08.691 22:49:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:08.691 22:49:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:08.691 00:26:08.691 real 0m11.435s 00:26:08.691 user 0m20.287s 00:26:08.691 sys 0m1.423s 00:26:08.691 22:49:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1118 -- # xtrace_disable 00:26:08.691 22:49:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:08.691 ************************************ 00:26:08.691 END TEST fio_dif_1_multi_subsystems 00:26:08.691 ************************************ 00:26:08.692 22:49:52 nvmf_dif -- common/autotest_common.sh@1136 -- # return 0 00:26:08.692 22:49:52 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:08.692 22:49:52 nvmf_dif -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:26:08.692 22:49:52 nvmf_dif -- common/autotest_common.sh@1099 -- # xtrace_disable 00:26:08.692 22:49:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:08.692 ************************************ 00:26:08.692 START TEST fio_dif_rand_params 00:26:08.692 ************************************ 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1117 -- # fio_dif_rand_params 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 
-- # bs=128k 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:08.692 bdev_null0 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:08.692 [2024-07-15 22:49:52.067843] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:08.692 { 00:26:08.692 "params": { 00:26:08.692 "name": "Nvme$subsystem", 00:26:08.692 "trtype": "$TEST_TRANSPORT", 00:26:08.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:08.692 "adrfam": "ipv4", 00:26:08.692 "trsvcid": "$NVMF_PORT", 00:26:08.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:08.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:08.692 "hdgst": ${hdgst:-false}, 00:26:08.692 "ddgst": ${ddgst:-false} 00:26:08.692 }, 00:26:08.692 "method": "bdev_nvme_attach_controller" 00:26:08.692 } 00:26:08.692 EOF 00:26:08.692 )") 
00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local sanitizers 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # shift 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local asan_lib= 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # grep libasan 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1339 -- # awk '{print $3}' 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:08.692 "params": { 00:26:08.692 "name": "Nvme0", 00:26:08.692 "trtype": "tcp", 00:26:08.692 "traddr": "10.0.0.2", 00:26:08.692 "adrfam": "ipv4", 00:26:08.692 "trsvcid": "4420", 00:26:08.692 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:08.692 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:08.692 "hdgst": false, 00:26:08.692 "ddgst": false 00:26:08.692 }, 00:26:08.692 "method": "bdev_nvme_attach_controller" 00:26:08.692 }' 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # asan_lib= 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # asan_lib= 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:08.692 22:49:52 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1346 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:08.951 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:08.951 ... 00:26:08.951 fio-3.35 00:26:08.951 Starting 3 threads 00:26:15.507 00:26:15.507 filename0: (groupid=0, jobs=1): err= 0: pid=1376828: Mon Jul 15 22:49:57 2024 00:26:15.507 read: IOPS=137, BW=17.2MiB/s (18.0MB/s)(85.9MiB/5004msec) 00:26:15.507 slat (nsec): min=5673, max=77088, avg=11556.37, stdev=3173.42 00:26:15.507 clat (usec): min=7687, max=96068, avg=21831.55, stdev=17382.07 00:26:15.507 lat (usec): min=7699, max=96080, avg=21843.11, stdev=17381.99 00:26:15.507 clat percentiles (usec): 00:26:15.507 | 1.00th=[ 8094], 5.00th=[ 8848], 10.00th=[ 9765], 20.00th=[11076], 00:26:15.507 | 30.00th=[11731], 40.00th=[12518], 50.00th=[13698], 60.00th=[14877], 00:26:15.507 | 70.00th=[16712], 80.00th=[49546], 90.00th=[52691], 95.00th=[54789], 00:26:15.507 | 99.00th=[57934], 99.50th=[92799], 99.90th=[95945], 99.95th=[95945], 00:26:15.507 | 99.99th=[95945] 00:26:15.507 bw ( KiB/s): min=11520, max=23040, per=28.48%, avg=17513.40, stdev=4017.42, samples=10 00:26:15.507 iops : min= 90, max= 180, avg=136.80, stdev=31.40, samples=10 00:26:15.507 lat (msec) : 10=11.21%, 20=66.08%, 50=3.20%, 100=19.51% 00:26:15.507 cpu : usr=92.80%, sys=6.80%, ctx=20, majf=0, minf=153 00:26:15.507 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:15.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.507 issued rwts: total=687,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.507 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:15.507 filename0: (groupid=0, jobs=1): err= 0: pid=1376829: Mon Jul 15 22:49:57 2024 00:26:15.507 read: IOPS=162, BW=20.3MiB/s (21.3MB/s)(102MiB/5044msec) 
00:26:15.507 slat (nsec): min=5631, max=31778, avg=11133.25, stdev=2271.10 00:26:15.507 clat (usec): min=6360, max=93145, avg=18375.73, stdev=16340.20 00:26:15.507 lat (usec): min=6371, max=93157, avg=18386.86, stdev=16340.17 00:26:15.507 clat percentiles (usec): 00:26:15.507 | 1.00th=[ 6521], 5.00th=[ 6980], 10.00th=[ 7635], 20.00th=[ 9241], 00:26:15.507 | 30.00th=[ 9896], 40.00th=[10683], 50.00th=[11731], 60.00th=[12649], 00:26:15.507 | 70.00th=[13960], 80.00th=[16319], 90.00th=[51643], 95.00th=[53216], 00:26:15.507 | 99.00th=[56361], 99.50th=[58459], 99.90th=[92799], 99.95th=[92799], 00:26:15.507 | 99.99th=[92799] 00:26:15.507 bw ( KiB/s): min=14336, max=26880, per=34.01%, avg=20915.20, stdev=3906.83, samples=10 00:26:15.507 iops : min= 112, max= 210, avg=163.40, stdev=30.52, samples=10 00:26:15.507 lat (msec) : 10=30.20%, 20=52.57%, 50=3.18%, 100=14.06% 00:26:15.507 cpu : usr=92.64%, sys=6.94%, ctx=10, majf=0, minf=69 00:26:15.507 IO depths : 1=2.0%, 2=98.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:15.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.507 issued rwts: total=818,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.507 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:15.507 filename0: (groupid=0, jobs=1): err= 0: pid=1376830: Mon Jul 15 22:49:57 2024 00:26:15.507 read: IOPS=183, BW=22.9MiB/s (24.1MB/s)(115MiB/5002msec) 00:26:15.507 slat (nsec): min=5666, max=34553, avg=11653.14, stdev=2023.15 00:26:15.507 clat (usec): min=6919, max=93130, avg=16329.95, stdev=13653.51 00:26:15.507 lat (usec): min=6932, max=93141, avg=16341.61, stdev=13653.45 00:26:15.507 clat percentiles (usec): 00:26:15.507 | 1.00th=[ 7242], 5.00th=[ 7635], 10.00th=[ 7963], 20.00th=[ 8717], 00:26:15.507 | 30.00th=[10028], 40.00th=[10945], 50.00th=[11600], 60.00th=[12780], 00:26:15.507 | 70.00th=[14091], 80.00th=[15926], 90.00th=[50594], 
95.00th=[53216], 00:26:15.507 | 99.00th=[56361], 99.50th=[57410], 99.90th=[92799], 99.95th=[92799], 00:26:15.507 | 99.99th=[92799] 00:26:15.507 bw ( KiB/s): min=17664, max=29184, per=38.86%, avg=23893.33, stdev=3938.99, samples=9 00:26:15.507 iops : min= 138, max= 228, avg=186.67, stdev=30.77, samples=9 00:26:15.507 lat (msec) : 10=30.07%, 20=58.39%, 50=1.42%, 100=10.13% 00:26:15.507 cpu : usr=91.86%, sys=7.52%, ctx=19, majf=0, minf=155 00:26:15.507 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:15.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.507 issued rwts: total=918,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.507 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:15.507 00:26:15.507 Run status group 0 (all jobs): 00:26:15.507 READ: bw=60.0MiB/s (63.0MB/s), 17.2MiB/s-22.9MiB/s (18.0MB/s-24.1MB/s), io=303MiB (318MB), run=5002-5044msec 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # 
rpc_cmd bdev_null_delete bdev_null0 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:15.507 bdev_null0 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:15.507 22:49:58 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:15.507 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:15.508 [2024-07-15 22:49:58.198455] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:15.508 bdev_null1 00:26:15.508 22:49:58 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 
00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:15.508 bdev_null2 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 
00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:15.508 { 00:26:15.508 "params": { 00:26:15.508 "name": "Nvme$subsystem", 00:26:15.508 "trtype": "$TEST_TRANSPORT", 00:26:15.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:15.508 "adrfam": "ipv4", 00:26:15.508 "trsvcid": "$NVMF_PORT", 00:26:15.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:15.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:15.508 "hdgst": ${hdgst:-false}, 00:26:15.508 "ddgst": ${ddgst:-false} 00:26:15.508 }, 00:26:15.508 "method": "bdev_nvme_attach_controller" 00:26:15.508 } 00:26:15.508 EOF 00:26:15.508 )") 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local sanitizers 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # shift 00:26:15.508 22:49:58 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local asan_lib= 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # grep libasan 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:15.508 { 00:26:15.508 "params": { 00:26:15.508 "name": "Nvme$subsystem", 00:26:15.508 "trtype": "$TEST_TRANSPORT", 00:26:15.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:15.508 "adrfam": "ipv4", 00:26:15.508 "trsvcid": "$NVMF_PORT", 00:26:15.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:15.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:15.508 "hdgst": ${hdgst:-false}, 00:26:15.508 "ddgst": ${ddgst:-false} 00:26:15.508 }, 00:26:15.508 "method": "bdev_nvme_attach_controller" 00:26:15.508 } 00:26:15.508 EOF 00:26:15.508 )") 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:15.508 { 00:26:15.508 "params": { 00:26:15.508 "name": "Nvme$subsystem", 00:26:15.508 "trtype": "$TEST_TRANSPORT", 00:26:15.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:15.508 "adrfam": "ipv4", 00:26:15.508 "trsvcid": "$NVMF_PORT", 00:26:15.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:15.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:15.508 "hdgst": ${hdgst:-false}, 00:26:15.508 "ddgst": ${ddgst:-false} 00:26:15.508 }, 00:26:15.508 "method": "bdev_nvme_attach_controller" 00:26:15.508 } 00:26:15.508 EOF 00:26:15.508 )") 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:15.508 "params": { 00:26:15.508 "name": "Nvme0", 00:26:15.508 "trtype": "tcp", 00:26:15.508 "traddr": "10.0.0.2", 00:26:15.508 "adrfam": "ipv4", 00:26:15.508 "trsvcid": "4420", 00:26:15.508 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:15.508 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:15.508 "hdgst": false, 00:26:15.508 "ddgst": false 00:26:15.508 }, 00:26:15.508 "method": "bdev_nvme_attach_controller" 00:26:15.508 },{ 00:26:15.508 "params": { 00:26:15.508 "name": "Nvme1", 00:26:15.508 "trtype": "tcp", 00:26:15.508 "traddr": "10.0.0.2", 00:26:15.508 "adrfam": "ipv4", 00:26:15.508 "trsvcid": "4420", 00:26:15.508 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:15.508 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:15.508 "hdgst": false, 00:26:15.508 "ddgst": false 00:26:15.508 }, 00:26:15.508 "method": "bdev_nvme_attach_controller" 00:26:15.508 },{ 00:26:15.508 "params": { 00:26:15.508 "name": "Nvme2", 00:26:15.508 "trtype": "tcp", 00:26:15.508 "traddr": "10.0.0.2", 00:26:15.508 "adrfam": "ipv4", 00:26:15.508 "trsvcid": "4420", 00:26:15.508 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:15.508 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:15.508 "hdgst": false, 00:26:15.508 "ddgst": false 00:26:15.508 }, 00:26:15.508 "method": "bdev_nvme_attach_controller" 00:26:15.508 }' 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # asan_lib= 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:26:15.508 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:26:15.509 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:15.509 22:49:58 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:26:15.509 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:26:15.509 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # asan_lib= 00:26:15.509 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:26:15.509 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:15.509 22:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:15.509 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:15.509 ... 00:26:15.509 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:15.509 ... 00:26:15.509 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:15.509 ... 
00:26:15.509 fio-3.35 00:26:15.509 Starting 24 threads 00:26:27.715 00:26:27.715 filename0: (groupid=0, jobs=1): err= 0: pid=1377688: Mon Jul 15 22:50:09 2024 00:26:27.715 read: IOPS=447, BW=1791KiB/s (1834kB/s)(17.5MiB/10005msec) 00:26:27.715 slat (usec): min=8, max=111, avg=41.63, stdev=16.50 00:26:27.715 clat (msec): min=7, max=120, avg=35.33, stdev= 4.80 00:26:27.715 lat (msec): min=7, max=120, avg=35.37, stdev= 4.80 00:26:27.715 clat percentiles (msec): 00:26:27.715 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:26:27.715 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:26:27.715 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 43], 95.00th=[ 44], 00:26:27.715 | 99.00th=[ 45], 99.50th=[ 46], 99.90th=[ 87], 99.95th=[ 87], 00:26:27.715 | 99.99th=[ 121] 00:26:27.715 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1785.26, stdev=167.84, samples=19 00:26:27.715 iops : min= 352, max= 480, avg=446.32, stdev=41.96, samples=19 00:26:27.715 lat (msec) : 10=0.04%, 20=0.04%, 50=99.55%, 100=0.31%, 250=0.04% 00:26:27.715 cpu : usr=96.01%, sys=2.44%, ctx=51, majf=0, minf=26 00:26:27.715 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:26:27.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.716 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.716 issued rwts: total=4480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.716 filename0: (groupid=0, jobs=1): err= 0: pid=1377689: Mon Jul 15 22:50:09 2024 00:26:27.716 read: IOPS=450, BW=1802KiB/s (1845kB/s)(17.6MiB/10017msec) 00:26:27.716 slat (usec): min=6, max=206, avg=30.23, stdev=24.91 00:26:27.716 clat (usec): min=17934, max=53466, avg=35238.26, stdev=3562.12 00:26:27.716 lat (usec): min=17942, max=53555, avg=35268.48, stdev=3562.44 00:26:27.716 clat percentiles (usec): 00:26:27.716 | 1.00th=[31589], 5.00th=[32900], 10.00th=[33424], 
20.00th=[33817], 00:26:27.716 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:26:27.716 | 70.00th=[34341], 80.00th=[34866], 90.00th=[42730], 95.00th=[43254], 00:26:27.716 | 99.00th=[44303], 99.50th=[45351], 99.90th=[47973], 99.95th=[51119], 00:26:27.716 | 99.99th=[53216] 00:26:27.716 bw ( KiB/s): min= 1424, max= 1920, per=4.18%, avg=1798.40, stdev=150.58, samples=20 00:26:27.716 iops : min= 356, max= 480, avg=449.60, stdev=37.64, samples=20 00:26:27.716 lat (msec) : 20=0.40%, 50=99.51%, 100=0.09% 00:26:27.716 cpu : usr=95.64%, sys=2.52%, ctx=72, majf=0, minf=30 00:26:27.716 IO depths : 1=5.0%, 2=11.2%, 4=24.9%, 8=51.4%, 16=7.6%, 32=0.0%, >=64=0.0% 00:26:27.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.716 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.716 issued rwts: total=4512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.716 filename0: (groupid=0, jobs=1): err= 0: pid=1377690: Mon Jul 15 22:50:09 2024 00:26:27.716 read: IOPS=448, BW=1795KiB/s (1838kB/s)(17.6MiB/10018msec) 00:26:27.716 slat (usec): min=8, max=199, avg=39.76, stdev=21.00 00:26:27.716 clat (usec): min=20033, max=61840, avg=35275.70, stdev=3812.19 00:26:27.716 lat (usec): min=20044, max=61857, avg=35315.46, stdev=3811.48 00:26:27.716 clat percentiles (usec): 00:26:27.716 | 1.00th=[30802], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:26:27.716 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:26:27.716 | 70.00th=[34341], 80.00th=[34866], 90.00th=[42730], 95.00th=[43254], 00:26:27.716 | 99.00th=[44827], 99.50th=[45876], 99.90th=[61604], 99.95th=[61604], 00:26:27.716 | 99.99th=[61604] 00:26:27.716 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1791.80, stdev=166.12, samples=20 00:26:27.716 iops : min= 352, max= 480, avg=447.95, stdev=41.53, samples=20 00:26:27.716 lat (msec) : 50=99.56%, 100=0.44% 
00:26:27.716 cpu : usr=98.06%, sys=1.50%, ctx=16, majf=0, minf=17 00:26:27.716 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:26:27.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.716 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.716 issued rwts: total=4496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.716 filename0: (groupid=0, jobs=1): err= 0: pid=1377691: Mon Jul 15 22:50:09 2024 00:26:27.716 read: IOPS=450, BW=1800KiB/s (1843kB/s)(17.6MiB/10026msec) 00:26:27.716 slat (usec): min=8, max=103, avg=33.16, stdev=14.40 00:26:27.716 clat (usec): min=25184, max=47993, avg=35288.88, stdev=3423.41 00:26:27.716 lat (usec): min=25239, max=48030, avg=35322.04, stdev=3422.37 00:26:27.716 clat percentiles (usec): 00:26:27.716 | 1.00th=[30016], 5.00th=[33162], 10.00th=[33424], 20.00th=[33817], 00:26:27.716 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:26:27.716 | 70.00th=[34341], 80.00th=[35390], 90.00th=[42730], 95.00th=[43254], 00:26:27.716 | 99.00th=[44303], 99.50th=[45876], 99.90th=[47973], 99.95th=[47973], 00:26:27.716 | 99.99th=[47973] 00:26:27.716 bw ( KiB/s): min= 1408, max= 1920, per=4.18%, avg=1795.95, stdev=156.39, samples=20 00:26:27.716 iops : min= 352, max= 480, avg=448.95, stdev=39.08, samples=20 00:26:27.716 lat (msec) : 50=100.00% 00:26:27.716 cpu : usr=98.00%, sys=1.52%, ctx=91, majf=0, minf=20 00:26:27.716 IO depths : 1=5.9%, 2=12.1%, 4=24.8%, 8=50.6%, 16=6.6%, 32=0.0%, >=64=0.0% 00:26:27.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.716 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.716 issued rwts: total=4512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.716 filename0: (groupid=0, jobs=1): err= 0: pid=1377692: Mon 
Jul 15 22:50:09 2024 00:26:27.716 read: IOPS=450, BW=1803KiB/s (1846kB/s)(17.6MiB/10010msec) 00:26:27.716 slat (usec): min=8, max=342, avg=40.69, stdev=22.40 00:26:27.716 clat (usec): min=18559, max=46025, avg=35108.44, stdev=3436.26 00:26:27.716 lat (usec): min=18575, max=46061, avg=35149.13, stdev=3434.50 00:26:27.716 clat percentiles (usec): 00:26:27.716 | 1.00th=[32113], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:26:27.716 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:26:27.716 | 70.00th=[34341], 80.00th=[34866], 90.00th=[42730], 95.00th=[43254], 00:26:27.716 | 99.00th=[43779], 99.50th=[44827], 99.90th=[45876], 99.95th=[45876], 00:26:27.716 | 99.99th=[45876] 00:26:27.716 bw ( KiB/s): min= 1408, max= 1920, per=4.18%, avg=1798.40, stdev=152.44, samples=20 00:26:27.716 iops : min= 352, max= 480, avg=449.60, stdev=38.11, samples=20 00:26:27.716 lat (msec) : 20=0.35%, 50=99.65% 00:26:27.716 cpu : usr=97.82%, sys=1.61%, ctx=18, majf=0, minf=22 00:26:27.716 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:27.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.716 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.716 issued rwts: total=4512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.716 filename0: (groupid=0, jobs=1): err= 0: pid=1377693: Mon Jul 15 22:50:09 2024 00:26:27.716 read: IOPS=447, BW=1790KiB/s (1833kB/s)(17.5MiB/10007msec) 00:26:27.716 slat (usec): min=8, max=145, avg=37.85, stdev=19.91 00:26:27.716 clat (usec): min=20003, max=88651, avg=35403.40, stdev=4684.42 00:26:27.716 lat (usec): min=20025, max=88673, avg=35441.25, stdev=4684.73 00:26:27.716 clat percentiles (usec): 00:26:27.716 | 1.00th=[31851], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:26:27.716 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:26:27.716 | 
70.00th=[34341], 80.00th=[35390], 90.00th=[42730], 95.00th=[43254], 00:26:27.716 | 99.00th=[45351], 99.50th=[50070], 99.90th=[86508], 99.95th=[88605], 00:26:27.716 | 99.99th=[88605] 00:26:27.716 bw ( KiB/s): min= 1408, max= 1920, per=4.14%, avg=1778.68, stdev=179.95, samples=19 00:26:27.716 iops : min= 352, max= 480, avg=444.63, stdev=44.98, samples=19 00:26:27.716 lat (msec) : 50=99.51%, 100=0.49% 00:26:27.716 cpu : usr=98.22%, sys=1.37%, ctx=15, majf=0, minf=16 00:26:27.716 IO depths : 1=4.8%, 2=10.9%, 4=24.6%, 8=52.0%, 16=7.8%, 32=0.0%, >=64=0.0% 00:26:27.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.716 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.716 issued rwts: total=4478,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.716 filename0: (groupid=0, jobs=1): err= 0: pid=1377694: Mon Jul 15 22:50:09 2024 00:26:27.716 read: IOPS=448, BW=1795KiB/s (1838kB/s)(17.6MiB/10017msec) 00:26:27.716 slat (usec): min=8, max=125, avg=41.97, stdev=17.18 00:26:27.716 clat (usec): min=23882, max=61795, avg=35293.42, stdev=3746.08 00:26:27.716 lat (usec): min=23932, max=61811, avg=35335.39, stdev=3743.73 00:26:27.716 clat percentiles (usec): 00:26:27.716 | 1.00th=[29492], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:26:27.716 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:26:27.716 | 70.00th=[34341], 80.00th=[36439], 90.00th=[42730], 95.00th=[43254], 00:26:27.716 | 99.00th=[44303], 99.50th=[45351], 99.90th=[61604], 99.95th=[61604], 00:26:27.716 | 99.99th=[61604] 00:26:27.716 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1791.80, stdev=165.41, samples=20 00:26:27.716 iops : min= 352, max= 480, avg=447.95, stdev=41.35, samples=20 00:26:27.716 lat (msec) : 50=99.64%, 100=0.36% 00:26:27.716 cpu : usr=98.17%, sys=1.42%, ctx=16, majf=0, minf=17 00:26:27.716 IO depths : 1=4.1%, 2=10.2%, 4=24.4%, 8=52.9%, 
16=8.4%, 32=0.0%, >=64=0.0% 00:26:27.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.716 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.716 issued rwts: total=4496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.716 filename0: (groupid=0, jobs=1): err= 0: pid=1377695: Mon Jul 15 22:50:09 2024 00:26:27.716 read: IOPS=447, BW=1791KiB/s (1834kB/s)(17.5MiB/10007msec) 00:26:27.716 slat (usec): min=8, max=101, avg=39.53, stdev=14.42 00:26:27.716 clat (usec): min=18772, max=89208, avg=35396.09, stdev=4800.03 00:26:27.716 lat (usec): min=18786, max=89241, avg=35435.62, stdev=4799.36 00:26:27.716 clat percentiles (usec): 00:26:27.716 | 1.00th=[29492], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:26:27.716 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:26:27.716 | 70.00th=[34341], 80.00th=[35390], 90.00th=[42730], 95.00th=[43254], 00:26:27.716 | 99.00th=[45351], 99.50th=[49021], 99.90th=[88605], 99.95th=[89654], 00:26:27.716 | 99.99th=[89654] 00:26:27.716 bw ( KiB/s): min= 1408, max= 1920, per=4.14%, avg=1778.32, stdev=180.53, samples=19 00:26:27.716 iops : min= 352, max= 480, avg=444.58, stdev=45.13, samples=19 00:26:27.716 lat (msec) : 20=0.22%, 50=99.33%, 100=0.45% 00:26:27.716 cpu : usr=93.62%, sys=3.41%, ctx=267, majf=0, minf=14 00:26:27.716 IO depths : 1=4.2%, 2=10.3%, 4=24.6%, 8=52.5%, 16=8.3%, 32=0.0%, >=64=0.0% 00:26:27.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.716 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.716 issued rwts: total=4480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.716 filename1: (groupid=0, jobs=1): err= 0: pid=1377696: Mon Jul 15 22:50:09 2024 00:26:27.716 read: IOPS=447, BW=1789KiB/s (1832kB/s)(17.5MiB/10017msec) 00:26:27.716 
slat (usec): min=8, max=107, avg=31.99, stdev=19.99 00:26:27.716 clat (usec): min=21879, max=62371, avg=35522.41, stdev=4257.33 00:26:27.716 lat (usec): min=21903, max=62432, avg=35554.40, stdev=4257.37 00:26:27.716 clat percentiles (usec): 00:26:27.716 | 1.00th=[26608], 5.00th=[32900], 10.00th=[33424], 20.00th=[33817], 00:26:27.716 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:26:27.716 | 70.00th=[34341], 80.00th=[36963], 90.00th=[42730], 95.00th=[43254], 00:26:27.716 | 99.00th=[46400], 99.50th=[58983], 99.90th=[61604], 99.95th=[61604], 00:26:27.716 | 99.99th=[62129] 00:26:27.716 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1785.40, stdev=160.30, samples=20 00:26:27.716 iops : min= 352, max= 480, avg=446.35, stdev=40.08, samples=20 00:26:27.716 lat (msec) : 50=99.29%, 100=0.71% 00:26:27.716 cpu : usr=97.60%, sys=1.78%, ctx=75, majf=0, minf=27 00:26:27.716 IO depths : 1=1.0%, 2=7.1%, 4=24.4%, 8=56.0%, 16=11.5%, 32=0.0%, >=64=0.0% 00:26:27.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.716 complete : 0=0.0%, 4=94.3%, 8=0.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.716 issued rwts: total=4480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.717 filename1: (groupid=0, jobs=1): err= 0: pid=1377697: Mon Jul 15 22:50:09 2024 00:26:27.717 read: IOPS=447, BW=1789KiB/s (1832kB/s)(17.5MiB/10014msec) 00:26:27.717 slat (usec): min=6, max=110, avg=41.23, stdev=15.13 00:26:27.717 clat (usec): min=26044, max=95381, avg=35407.46, stdev=4729.50 00:26:27.717 lat (usec): min=26077, max=95396, avg=35448.68, stdev=4727.92 00:26:27.717 clat percentiles (usec): 00:26:27.717 | 1.00th=[32113], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:26:27.717 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:26:27.717 | 70.00th=[34341], 80.00th=[34866], 90.00th=[42730], 95.00th=[43254], 00:26:27.717 | 99.00th=[44827], 
99.50th=[45876], 99.90th=[89654], 99.95th=[94897], 00:26:27.717 | 99.99th=[94897] 00:26:27.717 bw ( KiB/s): min= 1408, max= 1920, per=4.14%, avg=1778.32, stdev=180.63, samples=19 00:26:27.717 iops : min= 352, max= 480, avg=444.58, stdev=45.16, samples=19 00:26:27.717 lat (msec) : 50=99.60%, 100=0.40% 00:26:27.717 cpu : usr=98.01%, sys=1.60%, ctx=37, majf=0, minf=19 00:26:27.717 IO depths : 1=6.0%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:26:27.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.717 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.717 issued rwts: total=4480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.717 filename1: (groupid=0, jobs=1): err= 0: pid=1377698: Mon Jul 15 22:50:09 2024 00:26:27.717 read: IOPS=451, BW=1807KiB/s (1851kB/s)(17.7MiB/10017msec) 00:26:27.717 slat (nsec): min=5037, max=99474, avg=17409.74, stdev=10403.97 00:26:27.717 clat (usec): min=17785, max=53351, avg=35266.94, stdev=4666.50 00:26:27.717 lat (usec): min=17792, max=53371, avg=35284.35, stdev=4666.04 00:26:27.717 clat percentiles (usec): 00:26:27.717 | 1.00th=[19792], 5.00th=[32113], 10.00th=[33424], 20.00th=[33817], 00:26:27.717 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:26:27.717 | 70.00th=[34341], 80.00th=[37487], 90.00th=[43254], 95.00th=[43254], 00:26:27.717 | 99.00th=[47973], 99.50th=[48497], 99.90th=[51643], 99.95th=[51643], 00:26:27.717 | 99.99th=[53216] 00:26:27.717 bw ( KiB/s): min= 1408, max= 1976, per=4.20%, avg=1804.80, stdev=154.22, samples=20 00:26:27.717 iops : min= 352, max= 494, avg=451.20, stdev=38.55, samples=20 00:26:27.717 lat (msec) : 20=1.04%, 50=98.72%, 100=0.24% 00:26:27.717 cpu : usr=97.55%, sys=1.71%, ctx=128, majf=0, minf=36 00:26:27.717 IO depths : 1=2.6%, 2=8.0%, 4=22.0%, 8=57.5%, 16=9.9%, 32=0.0%, >=64=0.0% 00:26:27.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.717 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.717 issued rwts: total=4526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.717 filename1: (groupid=0, jobs=1): err= 0: pid=1377699: Mon Jul 15 22:50:09 2024 00:26:27.717 read: IOPS=448, BW=1795KiB/s (1838kB/s)(17.6MiB/10017msec) 00:26:27.717 slat (usec): min=8, max=202, avg=40.65, stdev=21.42 00:26:27.717 clat (usec): min=26758, max=61565, avg=35291.12, stdev=3683.39 00:26:27.717 lat (usec): min=26767, max=61588, avg=35331.77, stdev=3680.03 00:26:27.717 clat percentiles (usec): 00:26:27.717 | 1.00th=[31851], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:26:27.717 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:26:27.717 | 70.00th=[34341], 80.00th=[34866], 90.00th=[42730], 95.00th=[43254], 00:26:27.717 | 99.00th=[44303], 99.50th=[45876], 99.90th=[61604], 99.95th=[61604], 00:26:27.717 | 99.99th=[61604] 00:26:27.717 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1791.80, stdev=165.95, samples=20 00:26:27.717 iops : min= 352, max= 480, avg=447.95, stdev=41.49, samples=20 00:26:27.717 lat (msec) : 50=99.64%, 100=0.36% 00:26:27.717 cpu : usr=98.03%, sys=1.57%, ctx=20, majf=0, minf=18 00:26:27.717 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:27.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.717 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.717 issued rwts: total=4496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.717 filename1: (groupid=0, jobs=1): err= 0: pid=1377700: Mon Jul 15 22:50:09 2024 00:26:27.717 read: IOPS=448, BW=1795KiB/s (1838kB/s)(17.6MiB/10018msec) 00:26:27.717 slat (usec): min=8, max=183, avg=38.58, stdev=20.95 00:26:27.717 clat (usec): 
min=26230, max=61748, avg=35281.15, stdev=3678.71 00:26:27.717 lat (usec): min=26258, max=61765, avg=35319.73, stdev=3676.82 00:26:27.717 clat percentiles (usec): 00:26:27.717 | 1.00th=[32113], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:26:27.717 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:26:27.717 | 70.00th=[34341], 80.00th=[34866], 90.00th=[42730], 95.00th=[43254], 00:26:27.717 | 99.00th=[44827], 99.50th=[45876], 99.90th=[61604], 99.95th=[61604], 00:26:27.717 | 99.99th=[61604] 00:26:27.717 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1791.80, stdev=166.12, samples=20 00:26:27.717 iops : min= 352, max= 480, avg=447.95, stdev=41.53, samples=20 00:26:27.717 lat (msec) : 50=99.64%, 100=0.36% 00:26:27.717 cpu : usr=97.78%, sys=1.71%, ctx=16, majf=0, minf=15 00:26:27.717 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:27.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.717 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.717 issued rwts: total=4496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.717 filename1: (groupid=0, jobs=1): err= 0: pid=1377701: Mon Jul 15 22:50:09 2024 00:26:27.717 read: IOPS=447, BW=1789KiB/s (1832kB/s)(17.5MiB/10009msec) 00:26:27.717 slat (usec): min=7, max=131, avg=38.99, stdev=18.70 00:26:27.717 clat (usec): min=12817, max=88977, avg=35463.91, stdev=4943.16 00:26:27.717 lat (usec): min=12827, max=88993, avg=35502.90, stdev=4939.86 00:26:27.717 clat percentiles (usec): 00:26:27.717 | 1.00th=[31851], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:26:27.717 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:26:27.717 | 70.00th=[34341], 80.00th=[34866], 90.00th=[42730], 95.00th=[43254], 00:26:27.717 | 99.00th=[47973], 99.50th=[58459], 99.90th=[88605], 99.95th=[88605], 00:26:27.717 | 99.99th=[88605] 
00:26:27.717 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1782.53, stdev=183.01, samples=19 00:26:27.717 iops : min= 352, max= 480, avg=445.63, stdev=45.75, samples=19 00:26:27.717 lat (msec) : 20=0.04%, 50=99.02%, 100=0.94% 00:26:27.717 cpu : usr=98.04%, sys=1.56%, ctx=14, majf=0, minf=31 00:26:27.717 IO depths : 1=1.0%, 2=7.0%, 4=24.2%, 8=56.3%, 16=11.7%, 32=0.0%, >=64=0.0% 00:26:27.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.717 complete : 0=0.0%, 4=94.2%, 8=0.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.717 issued rwts: total=4476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.717 filename1: (groupid=0, jobs=1): err= 0: pid=1377702: Mon Jul 15 22:50:09 2024 00:26:27.717 read: IOPS=450, BW=1804KiB/s (1847kB/s)(17.6MiB/10005msec) 00:26:27.717 slat (usec): min=4, max=206, avg=41.59, stdev=22.45 00:26:27.717 clat (usec): min=14574, max=57476, avg=35095.59, stdev=3555.68 00:26:27.717 lat (usec): min=14581, max=57511, avg=35137.17, stdev=3554.55 00:26:27.717 clat percentiles (usec): 00:26:27.717 | 1.00th=[32113], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:26:27.717 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:26:27.717 | 70.00th=[34341], 80.00th=[34866], 90.00th=[42730], 95.00th=[43254], 00:26:27.717 | 99.00th=[43779], 99.50th=[44827], 99.90th=[45876], 99.95th=[47449], 00:26:27.717 | 99.99th=[57410] 00:26:27.717 bw ( KiB/s): min= 1536, max= 1920, per=4.18%, avg=1798.68, stdev=150.70, samples=19 00:26:27.717 iops : min= 384, max= 480, avg=449.63, stdev=37.68, samples=19 00:26:27.717 lat (msec) : 20=0.35%, 50=99.60%, 100=0.04% 00:26:27.717 cpu : usr=98.14%, sys=1.42%, ctx=15, majf=0, minf=17 00:26:27.717 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:26:27.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.717 complete : 0=0.0%, 4=94.1%, 8=0.0%, 
16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.717 issued rwts: total=4512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.717 filename1: (groupid=0, jobs=1): err= 0: pid=1377703: Mon Jul 15 22:50:09 2024 00:26:27.717 read: IOPS=448, BW=1793KiB/s (1836kB/s)(17.5MiB/10006msec) 00:26:27.717 slat (usec): min=10, max=165, avg=45.69, stdev=17.60 00:26:27.717 clat (usec): min=6352, max=85741, avg=35261.67, stdev=4637.55 00:26:27.717 lat (usec): min=6371, max=85779, avg=35307.36, stdev=4636.16 00:26:27.717 clat percentiles (usec): 00:26:27.717 | 1.00th=[30016], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:26:27.717 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:26:27.717 | 70.00th=[34341], 80.00th=[34866], 90.00th=[42730], 95.00th=[43254], 00:26:27.717 | 99.00th=[44303], 99.50th=[45351], 99.90th=[85459], 99.95th=[85459], 00:26:27.717 | 99.99th=[85459] 00:26:27.717 bw ( KiB/s): min= 1408, max= 1923, per=4.15%, avg=1785.53, stdev=183.28, samples=19 00:26:27.717 iops : min= 352, max= 480, avg=446.32, stdev=45.85, samples=19 00:26:27.717 lat (msec) : 10=0.09%, 20=0.07%, 50=99.49%, 100=0.36% 00:26:27.717 cpu : usr=88.96%, sys=5.01%, ctx=260, majf=0, minf=27 00:26:27.717 IO depths : 1=6.0%, 2=12.1%, 4=24.4%, 8=50.9%, 16=6.6%, 32=0.0%, >=64=0.0% 00:26:27.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.717 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.717 issued rwts: total=4486,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.717 filename2: (groupid=0, jobs=1): err= 0: pid=1377704: Mon Jul 15 22:50:09 2024 00:26:27.717 read: IOPS=447, BW=1791KiB/s (1834kB/s)(17.5MiB/10007msec) 00:26:27.717 slat (usec): min=11, max=116, avg=41.94, stdev=13.69 00:26:27.717 clat (usec): min=26092, max=88314, avg=35354.49, stdev=4569.99 00:26:27.717 lat (usec): 
min=26114, max=88339, avg=35396.44, stdev=4568.73 00:26:27.717 clat percentiles (usec): 00:26:27.717 | 1.00th=[32113], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:26:27.717 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:26:27.717 | 70.00th=[34341], 80.00th=[34866], 90.00th=[42730], 95.00th=[43254], 00:26:27.717 | 99.00th=[44827], 99.50th=[45876], 99.90th=[88605], 99.95th=[88605], 00:26:27.717 | 99.99th=[88605] 00:26:27.717 bw ( KiB/s): min= 1408, max= 1920, per=4.14%, avg=1778.42, stdev=180.25, samples=19 00:26:27.717 iops : min= 352, max= 480, avg=444.58, stdev=45.12, samples=19 00:26:27.717 lat (msec) : 50=99.64%, 100=0.36% 00:26:27.717 cpu : usr=94.38%, sys=2.97%, ctx=104, majf=0, minf=19 00:26:27.717 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:27.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.717 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.717 issued rwts: total=4480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.717 filename2: (groupid=0, jobs=1): err= 0: pid=1377705: Mon Jul 15 22:50:09 2024 00:26:27.717 read: IOPS=449, BW=1800KiB/s (1843kB/s)(17.6MiB/10018msec) 00:26:27.718 slat (usec): min=8, max=149, avg=46.26, stdev=19.65 00:26:27.718 clat (usec): min=25635, max=61824, avg=35156.20, stdev=3805.52 00:26:27.718 lat (usec): min=25645, max=61842, avg=35202.46, stdev=3802.58 00:26:27.718 clat percentiles (usec): 00:26:27.718 | 1.00th=[31851], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:26:27.718 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:26:27.718 | 70.00th=[34341], 80.00th=[34341], 90.00th=[42730], 95.00th=[43254], 00:26:27.718 | 99.00th=[45351], 99.50th=[56886], 99.90th=[61604], 99.95th=[61604], 00:26:27.718 | 99.99th=[61604] 00:26:27.718 bw ( KiB/s): min= 1456, max= 1920, per=4.18%, avg=1796.60, 
stdev=158.66, samples=20 00:26:27.718 iops : min= 364, max= 480, avg=449.15, stdev=39.67, samples=20 00:26:27.718 lat (msec) : 50=99.27%, 100=0.73% 00:26:27.718 cpu : usr=98.09%, sys=1.48%, ctx=15, majf=0, minf=22 00:26:27.718 IO depths : 1=6.0%, 2=12.2%, 4=24.6%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0% 00:26:27.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.718 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.718 issued rwts: total=4508,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.718 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.718 filename2: (groupid=0, jobs=1): err= 0: pid=1377706: Mon Jul 15 22:50:09 2024 00:26:27.718 read: IOPS=451, BW=1805KiB/s (1849kB/s)(17.7MiB/10033msec) 00:26:27.718 slat (usec): min=4, max=108, avg=35.41, stdev=15.22 00:26:27.718 clat (usec): min=15621, max=71446, avg=35155.42, stdev=3785.50 00:26:27.718 lat (usec): min=15626, max=71463, avg=35190.83, stdev=3784.40 00:26:27.718 clat percentiles (usec): 00:26:27.718 | 1.00th=[27395], 5.00th=[33162], 10.00th=[33424], 20.00th=[33817], 00:26:27.718 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:26:27.718 | 70.00th=[34341], 80.00th=[34341], 90.00th=[42730], 95.00th=[43254], 00:26:27.718 | 99.00th=[43779], 99.50th=[45351], 99.90th=[45876], 99.95th=[71828], 00:26:27.718 | 99.99th=[71828] 00:26:27.718 bw ( KiB/s): min= 1536, max= 1920, per=4.20%, avg=1805.40, stdev=148.97, samples=20 00:26:27.718 iops : min= 384, max= 480, avg=451.35, stdev=37.24, samples=20 00:26:27.718 lat (msec) : 20=0.35%, 50=99.56%, 100=0.09% 00:26:27.718 cpu : usr=93.76%, sys=3.27%, ctx=143, majf=0, minf=23 00:26:27.718 IO depths : 1=5.9%, 2=12.0%, 4=24.4%, 8=51.1%, 16=6.6%, 32=0.0%, >=64=0.0% 00:26:27.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.718 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.718 issued rwts: total=4528,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:26:27.718 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.718 filename2: (groupid=0, jobs=1): err= 0: pid=1377707: Mon Jul 15 22:50:09 2024 00:26:27.718 read: IOPS=447, BW=1790KiB/s (1833kB/s)(17.5MiB/10009msec) 00:26:27.718 slat (usec): min=8, max=142, avg=38.96, stdev=21.47 00:26:27.718 clat (usec): min=13570, max=89160, avg=35423.23, stdev=4664.17 00:26:27.718 lat (usec): min=13581, max=89210, avg=35462.19, stdev=4664.64 00:26:27.718 clat percentiles (usec): 00:26:27.718 | 1.00th=[31851], 5.00th=[32900], 10.00th=[33424], 20.00th=[33817], 00:26:27.718 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:26:27.718 | 70.00th=[34341], 80.00th=[34866], 90.00th=[42730], 95.00th=[43254], 00:26:27.718 | 99.00th=[45351], 99.50th=[46400], 99.90th=[88605], 99.95th=[88605], 00:26:27.718 | 99.99th=[89654] 00:26:27.718 bw ( KiB/s): min= 1408, max= 1920, per=4.14%, avg=1778.32, stdev=178.79, samples=19 00:26:27.718 iops : min= 352, max= 480, avg=444.58, stdev=44.70, samples=19 00:26:27.718 lat (msec) : 20=0.04%, 50=99.51%, 100=0.45% 00:26:27.718 cpu : usr=98.08%, sys=1.52%, ctx=15, majf=0, minf=21 00:26:27.718 IO depths : 1=1.5%, 2=7.7%, 4=25.0%, 8=54.8%, 16=11.0%, 32=0.0%, >=64=0.0% 00:26:27.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.718 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.718 issued rwts: total=4480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.718 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.718 filename2: (groupid=0, jobs=1): err= 0: pid=1377708: Mon Jul 15 22:50:09 2024 00:26:27.718 read: IOPS=446, BW=1787KiB/s (1830kB/s)(17.5MiB/10004msec) 00:26:27.718 slat (usec): min=7, max=126, avg=39.42, stdev=18.49 00:26:27.718 clat (msec): min=6, max=119, avg=35.54, stdev= 5.60 00:26:27.718 lat (msec): min=6, max=120, avg=35.58, stdev= 5.59 00:26:27.718 clat percentiles (msec): 00:26:27.718 | 1.00th=[ 
27], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:26:27.718 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:26:27.718 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 43], 95.00th=[ 44], 00:26:27.718 | 99.00th=[ 47], 99.50th=[ 61], 99.90th=[ 87], 99.95th=[ 121], 00:26:27.718 | 99.99th=[ 121] 00:26:27.718 bw ( KiB/s): min= 1424, max= 1920, per=4.14%, avg=1780.21, stdev=161.84, samples=19 00:26:27.718 iops : min= 356, max= 480, avg=445.05, stdev=40.46, samples=19 00:26:27.718 lat (msec) : 10=0.13%, 20=0.34%, 50=98.59%, 100=0.85%, 250=0.09% 00:26:27.718 cpu : usr=98.01%, sys=1.59%, ctx=19, majf=0, minf=25 00:26:27.718 IO depths : 1=0.1%, 2=5.1%, 4=20.6%, 8=60.9%, 16=13.4%, 32=0.0%, >=64=0.0% 00:26:27.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.718 complete : 0=0.0%, 4=93.5%, 8=1.9%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.718 issued rwts: total=4470,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.718 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.718 filename2: (groupid=0, jobs=1): err= 0: pid=1377709: Mon Jul 15 22:50:09 2024 00:26:27.718 read: IOPS=451, BW=1808KiB/s (1851kB/s)(17.7MiB/10018msec) 00:26:27.718 slat (usec): min=7, max=112, avg=18.68, stdev=12.09 00:26:27.718 clat (usec): min=18724, max=64434, avg=35240.21, stdev=4339.93 00:26:27.718 lat (usec): min=18732, max=64449, avg=35258.89, stdev=4338.67 00:26:27.718 clat percentiles (usec): 00:26:27.718 | 1.00th=[19792], 5.00th=[32375], 10.00th=[33424], 20.00th=[33817], 00:26:27.718 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:26:27.718 | 70.00th=[34341], 80.00th=[36963], 90.00th=[43254], 95.00th=[43254], 00:26:27.718 | 99.00th=[47973], 99.50th=[48497], 99.90th=[49021], 99.95th=[49021], 00:26:27.718 | 99.99th=[64226] 00:26:27.718 bw ( KiB/s): min= 1408, max= 1920, per=4.20%, avg=1804.80, stdev=154.22, samples=20 00:26:27.718 iops : min= 352, max= 480, avg=451.20, stdev=38.55, samples=20 00:26:27.718 lat 
(msec) : 20=1.19%, 50=98.76%, 100=0.04% 00:26:27.718 cpu : usr=96.68%, sys=2.08%, ctx=117, majf=0, minf=28 00:26:27.718 IO depths : 1=4.6%, 2=10.4%, 4=23.5%, 8=53.6%, 16=7.9%, 32=0.0%, >=64=0.0% 00:26:27.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.718 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.718 issued rwts: total=4528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.718 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.718 filename2: (groupid=0, jobs=1): err= 0: pid=1377710: Mon Jul 15 22:50:09 2024 00:26:27.718 read: IOPS=438, BW=1754KiB/s (1796kB/s)(17.1MiB/10014msec) 00:26:27.718 slat (usec): min=8, max=109, avg=33.38, stdev=16.23 00:26:27.718 clat (usec): min=30848, max=67684, avg=36222.50, stdev=4969.59 00:26:27.718 lat (usec): min=30857, max=67760, avg=36255.88, stdev=4976.02 00:26:27.718 clat percentiles (usec): 00:26:27.718 | 1.00th=[33162], 5.00th=[33424], 10.00th=[33817], 20.00th=[33817], 00:26:27.718 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:26:27.718 | 70.00th=[34341], 80.00th=[38536], 90.00th=[42730], 95.00th=[44303], 00:26:27.718 | 99.00th=[58983], 99.50th=[63177], 99.90th=[67634], 99.95th=[67634], 00:26:27.718 | 99.99th=[67634] 00:26:27.718 bw ( KiB/s): min= 1280, max= 1920, per=4.07%, avg=1749.40, stdev=200.62, samples=20 00:26:27.718 iops : min= 320, max= 480, avg=437.35, stdev=50.16, samples=20 00:26:27.718 lat (msec) : 50=97.59%, 100=2.41% 00:26:27.718 cpu : usr=97.46%, sys=1.97%, ctx=26, majf=0, minf=24 00:26:27.718 IO depths : 1=6.0%, 2=12.1%, 4=24.7%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0% 00:26:27.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.718 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.718 issued rwts: total=4390,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.718 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.718 
filename2: (groupid=0, jobs=1): err= 0: pid=1377711: Mon Jul 15 22:50:09 2024 00:26:27.718 read: IOPS=449, BW=1797KiB/s (1840kB/s)(17.6MiB/10005msec) 00:26:27.718 slat (nsec): min=8287, max=97432, avg=40516.56, stdev=13325.86 00:26:27.718 clat (usec): min=6317, max=84269, avg=35241.25, stdev=4697.23 00:26:27.718 lat (usec): min=6325, max=84301, avg=35281.77, stdev=4695.97 00:26:27.718 clat percentiles (usec): 00:26:27.718 | 1.00th=[31851], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:26:27.718 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:26:27.718 | 70.00th=[34341], 80.00th=[34341], 90.00th=[42730], 95.00th=[43254], 00:26:27.718 | 99.00th=[44303], 99.50th=[45351], 99.90th=[84411], 99.95th=[84411], 00:26:27.718 | 99.99th=[84411] 00:26:27.718 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1785.26, stdev=167.84, samples=19 00:26:27.718 iops : min= 352, max= 480, avg=446.32, stdev=41.96, samples=19 00:26:27.718 lat (msec) : 10=0.31%, 50=99.33%, 100=0.36% 00:26:27.718 cpu : usr=97.00%, sys=1.98%, ctx=232, majf=0, minf=22 00:26:27.718 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:26:27.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.718 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.718 issued rwts: total=4494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.718 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.718 00:26:27.718 Run status group 0 (all jobs): 00:26:27.718 READ: bw=42.0MiB/s (44.0MB/s), 1754KiB/s-1808KiB/s (1796kB/s-1851kB/s), io=421MiB (442MB), run=10004-10033msec 00:26:27.718 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:27.718 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:27.718 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:27.718 22:50:09 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:26:27.718 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:27.718 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:27.718 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:27.718 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:27.718 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:27.718 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:27.718 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:27.718 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:27.718 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:27.718 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:27.718 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:27.718 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:27.718 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:27.718 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # 
set +x 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@30 -- # for sub in "$@" 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:27.719 bdev_null0 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:27.719 
[2024-07-15 22:50:09.786335] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:27.719 bdev_null1 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:27.719 { 00:26:27.719 "params": { 00:26:27.719 "name": "Nvme$subsystem", 00:26:27.719 "trtype": "$TEST_TRANSPORT", 00:26:27.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.719 "adrfam": "ipv4", 00:26:27.719 "trsvcid": "$NVMF_PORT", 00:26:27.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.719 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:26:27.719 "hdgst": ${hdgst:-false}, 00:26:27.719 "ddgst": ${ddgst:-false} 00:26:27.719 }, 00:26:27.719 "method": "bdev_nvme_attach_controller" 00:26:27.719 } 00:26:27.719 EOF 00:26:27.719 )") 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local sanitizers 00:26:27.719 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:27.720 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:27.720 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # shift 00:26:27.720 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local asan_lib= 00:26:27.720 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:26:27.720 22:50:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:27.720 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:27.720 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:27.720 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # grep libasan 00:26:27.720 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:27.720 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:27.720 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:26:27.720 22:50:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:27.720 22:50:09 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:27.720 { 00:26:27.720 "params": { 00:26:27.720 "name": "Nvme$subsystem", 00:26:27.720 "trtype": "$TEST_TRANSPORT", 00:26:27.720 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.720 "adrfam": "ipv4", 00:26:27.720 "trsvcid": "$NVMF_PORT", 00:26:27.720 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.720 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.720 "hdgst": ${hdgst:-false}, 00:26:27.720 "ddgst": ${ddgst:-false} 00:26:27.720 }, 00:26:27.720 "method": "bdev_nvme_attach_controller" 00:26:27.720 } 00:26:27.720 EOF 00:26:27.720 )") 00:26:27.720 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:27.720 22:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:27.720 22:50:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:27.720 22:50:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:26:27.720 22:50:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:27.720 22:50:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:27.720 "params": { 00:26:27.720 "name": "Nvme0", 00:26:27.720 "trtype": "tcp", 00:26:27.720 "traddr": "10.0.0.2", 00:26:27.720 "adrfam": "ipv4", 00:26:27.720 "trsvcid": "4420", 00:26:27.720 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:27.720 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:27.720 "hdgst": false, 00:26:27.720 "ddgst": false 00:26:27.720 }, 00:26:27.720 "method": "bdev_nvme_attach_controller" 00:26:27.720 },{ 00:26:27.720 "params": { 00:26:27.720 "name": "Nvme1", 00:26:27.720 "trtype": "tcp", 00:26:27.720 "traddr": "10.0.0.2", 00:26:27.720 "adrfam": "ipv4", 00:26:27.720 "trsvcid": "4420", 00:26:27.720 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:27.720 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:27.720 "hdgst": false, 00:26:27.720 "ddgst": false 00:26:27.720 }, 00:26:27.720 "method": 
"bdev_nvme_attach_controller" 00:26:27.720 }' 00:26:27.720 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # asan_lib= 00:26:27.720 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:26:27.720 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:26:27.720 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:27.720 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:26:27.720 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:26:27.720 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # asan_lib= 00:26:27.720 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:26:27.720 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:27.720 22:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:27.720 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:27.720 ... 00:26:27.720 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:27.720 ... 
00:26:27.720 fio-3.35 00:26:27.720 Starting 4 threads 00:26:32.980 00:26:32.980 filename0: (groupid=0, jobs=1): err= 0: pid=1379109: Mon Jul 15 22:50:15 2024 00:26:32.980 read: IOPS=1863, BW=14.6MiB/s (15.3MB/s)(72.8MiB/5002msec) 00:26:32.980 slat (nsec): min=7028, max=45522, avg=12181.40, stdev=5785.45 00:26:32.980 clat (usec): min=1419, max=8557, avg=4254.66, stdev=711.32 00:26:32.980 lat (usec): min=1430, max=8571, avg=4266.85, stdev=709.88 00:26:32.980 clat percentiles (usec): 00:26:32.980 | 1.00th=[ 3163], 5.00th=[ 3556], 10.00th=[ 3621], 20.00th=[ 3752], 00:26:32.980 | 30.00th=[ 3884], 40.00th=[ 3982], 50.00th=[ 4080], 60.00th=[ 4178], 00:26:32.980 | 70.00th=[ 4293], 80.00th=[ 4490], 90.00th=[ 5800], 95.00th=[ 5866], 00:26:32.980 | 99.00th=[ 6063], 99.50th=[ 6259], 99.90th=[ 6783], 99.95th=[ 6783], 00:26:32.980 | 99.99th=[ 8586] 00:26:32.980 bw ( KiB/s): min=14304, max=15392, per=25.29%, avg=14897.78, stdev=339.64, samples=9 00:26:32.980 iops : min= 1788, max= 1924, avg=1862.22, stdev=42.46, samples=9 00:26:32.980 lat (msec) : 2=0.06%, 4=41.64%, 10=58.30% 00:26:32.980 cpu : usr=94.72%, sys=4.82%, ctx=10, majf=0, minf=52 00:26:32.980 IO depths : 1=0.1%, 2=2.3%, 4=68.5%, 8=29.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:32.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.980 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.980 issued rwts: total=9323,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.980 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:32.980 filename0: (groupid=0, jobs=1): err= 0: pid=1379110: Mon Jul 15 22:50:15 2024 00:26:32.980 read: IOPS=1859, BW=14.5MiB/s (15.2MB/s)(72.7MiB/5004msec) 00:26:32.980 slat (nsec): min=7237, max=52915, avg=12183.63, stdev=5379.71 00:26:32.980 clat (usec): min=2425, max=7274, avg=4263.73, stdev=733.78 00:26:32.980 lat (usec): min=2438, max=7287, avg=4275.92, stdev=733.81 00:26:32.980 clat percentiles (usec): 00:26:32.980 | 1.00th=[ 3130], 
5.00th=[ 3556], 10.00th=[ 3654], 20.00th=[ 3785], 00:26:32.980 | 30.00th=[ 3851], 40.00th=[ 3916], 50.00th=[ 3982], 60.00th=[ 4178], 00:26:32.980 | 70.00th=[ 4293], 80.00th=[ 4555], 90.00th=[ 5800], 95.00th=[ 5932], 00:26:32.980 | 99.00th=[ 6194], 99.50th=[ 6587], 99.90th=[ 6718], 99.95th=[ 6783], 00:26:32.980 | 99.99th=[ 7242] 00:26:32.980 bw ( KiB/s): min=14419, max=15568, per=25.25%, avg=14877.10, stdev=330.87, samples=10 00:26:32.980 iops : min= 1802, max= 1946, avg=1859.60, stdev=41.42, samples=10 00:26:32.980 lat (msec) : 4=50.39%, 10=49.61% 00:26:32.980 cpu : usr=94.76%, sys=4.74%, ctx=11, majf=0, minf=32 00:26:32.980 IO depths : 1=0.1%, 2=1.8%, 4=70.6%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:32.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.980 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.980 issued rwts: total=9305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.980 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:32.980 filename1: (groupid=0, jobs=1): err= 0: pid=1379111: Mon Jul 15 22:50:15 2024 00:26:32.980 read: IOPS=1806, BW=14.1MiB/s (14.8MB/s)(70.6MiB/5002msec) 00:26:32.980 slat (usec): min=7, max=112, avg=13.84, stdev= 6.61 00:26:32.980 clat (usec): min=1544, max=45921, avg=4385.34, stdev=1467.25 00:26:32.980 lat (usec): min=1556, max=45945, avg=4399.18, stdev=1466.10 00:26:32.980 clat percentiles (usec): 00:26:32.980 | 1.00th=[ 3425], 5.00th=[ 3687], 10.00th=[ 3752], 20.00th=[ 3818], 00:26:32.980 | 30.00th=[ 3884], 40.00th=[ 3949], 50.00th=[ 4015], 60.00th=[ 4113], 00:26:32.980 | 70.00th=[ 4228], 80.00th=[ 4686], 90.00th=[ 5866], 95.00th=[ 5997], 00:26:32.980 | 99.00th=[ 6652], 99.50th=[ 6783], 99.90th=[ 7635], 99.95th=[45876], 00:26:32.980 | 99.99th=[45876] 00:26:32.980 bw ( KiB/s): min=12440, max=15216, per=24.54%, avg=14455.20, stdev=855.83, samples=10 00:26:32.980 iops : min= 1555, max= 1902, avg=1806.90, stdev=106.98, samples=10 00:26:32.980 lat (msec) 
: 2=0.01%, 4=46.76%, 10=53.14%, 50=0.09% 00:26:32.980 cpu : usr=95.14%, sys=4.26%, ctx=9, majf=0, minf=62 00:26:32.980 IO depths : 1=0.1%, 2=1.0%, 4=71.3%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:32.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.980 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.980 issued rwts: total=9038,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.980 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:32.980 filename1: (groupid=0, jobs=1): err= 0: pid=1379112: Mon Jul 15 22:50:15 2024 00:26:32.980 read: IOPS=1835, BW=14.3MiB/s (15.0MB/s)(71.7MiB/5001msec) 00:26:32.980 slat (nsec): min=7023, max=52234, avg=11916.63, stdev=5548.40 00:26:32.980 clat (usec): min=1198, max=7673, avg=4320.94, stdev=772.55 00:26:32.980 lat (usec): min=1206, max=7693, avg=4332.86, stdev=771.13 00:26:32.980 clat percentiles (usec): 00:26:32.980 | 1.00th=[ 3294], 5.00th=[ 3687], 10.00th=[ 3752], 20.00th=[ 3851], 00:26:32.980 | 30.00th=[ 3884], 40.00th=[ 3949], 50.00th=[ 4015], 60.00th=[ 4113], 00:26:32.980 | 70.00th=[ 4228], 80.00th=[ 4555], 90.00th=[ 5800], 95.00th=[ 5997], 00:26:32.980 | 99.00th=[ 6718], 99.50th=[ 6980], 99.90th=[ 7308], 99.95th=[ 7373], 00:26:32.980 | 99.99th=[ 7701] 00:26:32.980 bw ( KiB/s): min=13632, max=15056, per=24.86%, avg=14643.11, stdev=487.81, samples=9 00:26:32.980 iops : min= 1704, max= 1882, avg=1830.33, stdev=61.05, samples=9 00:26:32.980 lat (msec) : 2=0.04%, 4=49.23%, 10=50.72% 00:26:32.980 cpu : usr=94.94%, sys=4.62%, ctx=6, majf=0, minf=42 00:26:32.980 IO depths : 1=0.1%, 2=1.7%, 4=70.9%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:32.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.980 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.980 issued rwts: total=9181,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.980 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:32.980 
00:26:32.980 Run status group 0 (all jobs): 00:26:32.980 READ: bw=57.5MiB/s (60.3MB/s), 14.1MiB/s-14.6MiB/s (14.8MB/s-15.3MB/s), io=288MiB (302MB), run=5001-5004msec 00:26:32.980 22:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:32.980 22:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:32.980 22:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:32.980 22:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:32.980 22:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:32.980 22:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:32.980 22:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:32.980 22:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:32.980 22:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:32.980 22:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:32.980 22:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:32.980 22:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:32.980 22:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:32.980 22:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:32.980 22:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:32.980 22:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:32.980 22:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:32.980 22:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 
00:26:32.980 22:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:32.980 22:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:32.980 22:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:32.980 22:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:32.980 22:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:32.980 22:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:32.980 00:26:32.980 real 0m24.152s 00:26:32.980 user 4m30.127s 00:26:32.980 sys 0m7.849s 00:26:32.980 22:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1118 -- # xtrace_disable 00:26:32.980 22:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:32.980 ************************************ 00:26:32.980 END TEST fio_dif_rand_params 00:26:32.980 ************************************ 00:26:32.980 22:50:16 nvmf_dif -- common/autotest_common.sh@1136 -- # return 0 00:26:32.980 22:50:16 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:32.980 22:50:16 nvmf_dif -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:26:32.980 22:50:16 nvmf_dif -- common/autotest_common.sh@1099 -- # xtrace_disable 00:26:32.980 22:50:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:32.980 ************************************ 00:26:32.980 START TEST fio_dif_digest 00:26:32.980 ************************************ 00:26:32.980 22:50:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1117 -- # fio_dif_digest 00:26:32.980 22:50:16 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 
00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:32.981 bdev_null0 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- 
# rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:32.981 [2024-07-15 22:50:16.262298] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:32.981 { 00:26:32.981 "params": { 00:26:32.981 "name": "Nvme$subsystem", 00:26:32.981 "trtype": "$TEST_TRANSPORT", 00:26:32.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.981 "adrfam": "ipv4", 00:26:32.981 "trsvcid": "$NVMF_PORT", 00:26:32.981 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.981 "hdgst": ${hdgst:-false}, 00:26:32.981 "ddgst": ${ddgst:-false} 00:26:32.981 }, 00:26:32.981 "method": "bdev_nvme_attach_controller" 00:26:32.981 } 00:26:32.981 EOF 00:26:32.981 )") 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local sanitizers 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1334 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # shift 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local asan_lib= 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # grep libasan 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:32.981 "params": { 00:26:32.981 "name": "Nvme0", 00:26:32.981 "trtype": "tcp", 00:26:32.981 "traddr": "10.0.0.2", 00:26:32.981 "adrfam": "ipv4", 00:26:32.981 "trsvcid": "4420", 00:26:32.981 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:32.981 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:32.981 "hdgst": true, 00:26:32.981 "ddgst": true 00:26:32.981 }, 00:26:32.981 "method": "bdev_nvme_attach_controller" 00:26:32.981 }' 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # asan_lib= 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # asan_lib= 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:32.981 22:50:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 
-- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:33.240 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:33.240 ... 00:26:33.240 fio-3.35 00:26:33.240 Starting 3 threads 00:26:45.434 00:26:45.434 filename0: (groupid=0, jobs=1): err= 0: pid=1379863: Mon Jul 15 22:50:27 2024 00:26:45.434 read: IOPS=190, BW=23.8MiB/s (24.9MB/s)(239MiB/10046msec) 00:26:45.434 slat (nsec): min=7768, max=45410, avg=14062.81, stdev=3556.82 00:26:45.434 clat (usec): min=8878, max=58261, avg=15737.67, stdev=5310.30 00:26:45.434 lat (usec): min=8906, max=58275, avg=15751.73, stdev=5310.36 00:26:45.434 clat percentiles (usec): 00:26:45.434 | 1.00th=[10159], 5.00th=[11076], 10.00th=[11994], 20.00th=[13829], 00:26:45.434 | 30.00th=[14615], 40.00th=[15139], 50.00th=[15533], 60.00th=[15795], 00:26:45.434 | 70.00th=[16188], 80.00th=[16712], 90.00th=[17171], 95.00th=[17695], 00:26:45.434 | 99.00th=[54789], 99.50th=[57410], 99.90th=[58459], 99.95th=[58459], 00:26:45.434 | 99.99th=[58459] 00:26:45.434 bw ( KiB/s): min=18688, max=27392, per=33.24%, avg=24422.40, stdev=1698.92, samples=20 00:26:45.434 iops : min= 146, max= 214, avg=190.80, stdev=13.27, samples=20 00:26:45.434 lat (msec) : 10=0.73%, 20=97.59%, 50=0.16%, 100=1.52% 00:26:45.434 cpu : usr=90.01%, sys=9.50%, ctx=19, majf=0, minf=116 00:26:45.434 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:45.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.434 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.434 issued rwts: total=1910,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.434 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:45.434 filename0: (groupid=0, jobs=1): err= 0: pid=1379864: Mon Jul 15 22:50:27 2024 00:26:45.434 read: IOPS=181, BW=22.7MiB/s (23.8MB/s)(227MiB/10008msec) 00:26:45.434 slat (nsec): min=7543, 
max=36325, avg=13914.80, stdev=3199.49 00:26:45.434 clat (usec): min=8270, max=95145, avg=16513.09, stdev=8123.89 00:26:45.434 lat (usec): min=8283, max=95159, avg=16527.00, stdev=8123.88 00:26:45.434 clat percentiles (usec): 00:26:45.434 | 1.00th=[10028], 5.00th=[11863], 10.00th=[13304], 20.00th=[14091], 00:26:45.434 | 30.00th=[14484], 40.00th=[14877], 50.00th=[15139], 60.00th=[15533], 00:26:45.434 | 70.00th=[15795], 80.00th=[16188], 90.00th=[16909], 95.00th=[17957], 00:26:45.434 | 99.00th=[56886], 99.50th=[57934], 99.90th=[60031], 99.95th=[94897], 00:26:45.434 | 99.99th=[94897] 00:26:45.434 bw ( KiB/s): min=20480, max=25600, per=31.58%, avg=23206.40, stdev=1287.46, samples=20 00:26:45.434 iops : min= 160, max= 200, avg=181.30, stdev=10.06, samples=20 00:26:45.434 lat (msec) : 10=0.94%, 20=95.32%, 100=3.74% 00:26:45.434 cpu : usr=90.31%, sys=9.21%, ctx=24, majf=0, minf=136 00:26:45.434 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:45.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.434 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.434 issued rwts: total=1816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.434 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:45.434 filename0: (groupid=0, jobs=1): err= 0: pid=1379865: Mon Jul 15 22:50:27 2024 00:26:45.434 read: IOPS=203, BW=25.4MiB/s (26.6MB/s)(255MiB/10050msec) 00:26:45.434 slat (nsec): min=5538, max=36428, avg=13617.50, stdev=3217.32 00:26:45.434 clat (usec): min=7053, max=57300, avg=14717.92, stdev=2783.26 00:26:45.434 lat (usec): min=7065, max=57313, avg=14731.54, stdev=2783.44 00:26:45.434 clat percentiles (usec): 00:26:45.434 | 1.00th=[ 9503], 5.00th=[10683], 10.00th=[11207], 20.00th=[13173], 00:26:45.434 | 30.00th=[14091], 40.00th=[14746], 50.00th=[15139], 60.00th=[15533], 00:26:45.434 | 70.00th=[15795], 80.00th=[16188], 90.00th=[16712], 95.00th=[17171], 00:26:45.434 | 99.00th=[18220], 
99.50th=[18482], 99.90th=[55313], 99.95th=[56886], 00:26:45.434 | 99.99th=[57410] 00:26:45.434 bw ( KiB/s): min=23808, max=28928, per=35.54%, avg=26114.70, stdev=1331.71, samples=20 00:26:45.434 iops : min= 186, max= 226, avg=204.00, stdev=10.38, samples=20 00:26:45.434 lat (msec) : 10=2.25%, 20=97.50%, 50=0.10%, 100=0.15% 00:26:45.434 cpu : usr=90.22%, sys=9.29%, ctx=16, majf=0, minf=120 00:26:45.434 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:45.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.434 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.434 issued rwts: total=2043,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.434 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:45.434 00:26:45.434 Run status group 0 (all jobs): 00:26:45.434 READ: bw=71.8MiB/s (75.2MB/s), 22.7MiB/s-25.4MiB/s (23.8MB/s-26.6MB/s), io=721MiB (756MB), run=10008-10050msec 00:26:45.434 22:50:27 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:45.434 22:50:27 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:26:45.434 22:50:27 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:26:45.434 22:50:27 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:45.434 22:50:27 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:26:45.434 22:50:27 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:45.434 22:50:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:45.434 22:50:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:45.434 22:50:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:45.434 22:50:27 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:45.434 22:50:27 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:26:45.434 22:50:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:45.434 22:50:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:45.434 00:26:45.434 real 0m11.333s 00:26:45.434 user 0m28.409s 00:26:45.434 sys 0m3.104s 00:26:45.434 22:50:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1118 -- # xtrace_disable 00:26:45.434 22:50:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:45.434 ************************************ 00:26:45.434 END TEST fio_dif_digest 00:26:45.434 ************************************ 00:26:45.434 22:50:27 nvmf_dif -- common/autotest_common.sh@1136 -- # return 0 00:26:45.434 22:50:27 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:45.434 22:50:27 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:26:45.434 22:50:27 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:45.434 22:50:27 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:26:45.434 22:50:27 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:45.434 22:50:27 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:26:45.434 22:50:27 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:45.434 22:50:27 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:45.434 rmmod nvme_tcp 00:26:45.434 rmmod nvme_fabrics 00:26:45.434 rmmod nvme_keyring 00:26:45.435 22:50:27 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:45.435 22:50:27 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:26:45.435 22:50:27 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:26:45.435 22:50:27 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1373795 ']' 00:26:45.435 22:50:27 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1373795 00:26:45.435 22:50:27 nvmf_dif -- common/autotest_common.sh@942 -- # '[' -z 1373795 ']' 00:26:45.435 22:50:27 nvmf_dif -- common/autotest_common.sh@946 -- # kill -0 1373795 00:26:45.435 22:50:27 nvmf_dif -- 
common/autotest_common.sh@947 -- # uname 00:26:45.435 22:50:27 nvmf_dif -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:26:45.435 22:50:27 nvmf_dif -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1373795 00:26:45.435 22:50:27 nvmf_dif -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:26:45.435 22:50:27 nvmf_dif -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:26:45.435 22:50:27 nvmf_dif -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1373795' 00:26:45.435 killing process with pid 1373795 00:26:45.435 22:50:27 nvmf_dif -- common/autotest_common.sh@961 -- # kill 1373795 00:26:45.435 22:50:27 nvmf_dif -- common/autotest_common.sh@966 -- # wait 1373795 00:26:45.435 22:50:27 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:26:45.435 22:50:27 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:45.692 Waiting for block devices as requested 00:26:45.692 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:26:45.692 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:45.951 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:45.951 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:45.951 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:46.210 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:46.210 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:46.210 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:46.210 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:46.469 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:46.469 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:46.469 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:46.469 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:46.728 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:46.728 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:46.728 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:46.728 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 
00:26:46.986 22:50:30 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:46.986 22:50:30 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:46.986 22:50:30 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:46.986 22:50:30 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:46.986 22:50:30 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.986 22:50:30 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:46.986 22:50:30 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.886 22:50:32 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:48.886 00:26:48.886 real 1m6.948s 00:26:48.886 user 6m25.833s 00:26:48.886 sys 0m20.635s 00:26:48.886 22:50:32 nvmf_dif -- common/autotest_common.sh@1118 -- # xtrace_disable 00:26:48.887 22:50:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:48.887 ************************************ 00:26:48.887 END TEST nvmf_dif 00:26:48.887 ************************************ 00:26:49.154 22:50:32 -- common/autotest_common.sh@1136 -- # return 0 00:26:49.154 22:50:32 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:49.154 22:50:32 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:26:49.154 22:50:32 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:26:49.154 22:50:32 -- common/autotest_common.sh@10 -- # set +x 00:26:49.154 ************************************ 00:26:49.154 START TEST nvmf_abort_qd_sizes 00:26:49.154 ************************************ 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:49.154 * Looking for test storage... 
00:26:49.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:49.154 22:50:32 
nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:26:49.154 22:50:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:51.102 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:51.102 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:26:51.102 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:51.102 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:51.102 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:51.102 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:51.102 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:51.102 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:26:51.102 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:51.102 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:26:51.102 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:26:51.102 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:26:51.102 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:26:51.102 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:26:51.102 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@304 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:51.103 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:51.103 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:26:51.103 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:51.103 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:51.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:51.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:26:51.103 00:26:51.103 --- 10.0.0.2 ping statistics --- 00:26:51.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.103 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:51.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:51.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:26:51.103 00:26:51.103 --- 10.0.0.1 ping statistics --- 00:26:51.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.103 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:26:51.103 22:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:52.506 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:52.506 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:52.506 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:52.506 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:52.506 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:52.506 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:52.506 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:52.506 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:52.506 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:52.506 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:52.506 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:52.506 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:52.506 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:52.506 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:52.506 0000:80:04.1 (8086 0e21): 
ioatdma -> vfio-pci 00:26:52.506 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:53.442 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:26:53.442 22:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:53.442 22:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:53.442 22:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:53.442 22:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:53.442 22:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:53.442 22:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:53.442 22:50:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:26:53.442 22:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:53.442 22:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:53.442 22:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:53.442 22:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1384767 00:26:53.442 22:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:53.442 22:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1384767 00:26:53.442 22:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@823 -- # '[' -z 1384767 ']' 00:26:53.442 22:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.442 22:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@828 -- # local max_retries=100 00:26:53.442 22:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:53.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:53.442 22:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # xtrace_disable 00:26:53.442 22:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:53.442 [2024-07-15 22:50:36.869428] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:26:53.442 [2024-07-15 22:50:36.869496] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:53.442 [2024-07-15 22:50:36.936189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:53.701 [2024-07-15 22:50:37.047721] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:53.701 [2024-07-15 22:50:37.047775] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:53.701 [2024-07-15 22:50:37.047802] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:53.701 [2024-07-15 22:50:37.047813] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:53.701 [2024-07-15 22:50:37.047822] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:53.701 [2024-07-15 22:50:37.047898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.701 [2024-07-15 22:50:37.047934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:53.701 [2024-07-15 22:50:37.047965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:53.701 [2024-07-15 22:50:37.047969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.701 22:50:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:26:53.701 22:50:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # return 0 00:26:53.701 22:50:37 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:53.701 22:50:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:53.701 22:50:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:53.701 22:50:37 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:53.701 22:50:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:53.701 22:50:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:26:53.701 22:50:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:26:53.701 22:50:37 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:26:53.701 22:50:37 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:26:53.701 22:50:37 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:26:53.701 22:50:37 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:26:53.701 22:50:37 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:26:53.701 22:50:37 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 
00:26:53.701 22:50:37 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:26:53.701 22:50:37 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:26:53.701 22:50:37 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:26:53.701 22:50:37 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:26:53.701 22:50:37 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:26:53.701 22:50:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:26:53.701 22:50:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:26:53.701 22:50:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:26:53.701 22:50:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:26:53.701 22:50:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # xtrace_disable 00:26:53.701 22:50:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:53.961 ************************************ 00:26:53.961 START TEST spdk_target_abort 00:26:53.961 ************************************ 00:26:53.961 22:50:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1117 -- # spdk_target 00:26:53.961 22:50:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:53.961 22:50:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:26:53.961 22:50:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:53.961 22:50:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:57.244 spdk_targetn1 00:26:57.244 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:57.244 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:57.244 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:57.244 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:57.244 [2024-07-15 22:50:40.058125] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:57.244 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:57.244 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:26:57.244 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:57.244 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:57.244 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:57.244 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:26:57.244 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:57.244 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:57.244 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:57.244 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:26:57.244 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:57.244 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:57.244 [2024-07-15 22:50:40.090448] tcp.c: 981:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:57.244 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:57.244 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:26:57.244 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:57.244 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:57.244 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:57.244 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:57.244 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:57.244 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:57.244 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:57.244 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:57.244 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:57.244 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:57.244 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:57.244 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:57.245 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:57.245 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:57.245 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:57.245 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:57.245 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:57.245 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:57.245 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:57.245 22:50:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:00.526 Initializing NVMe Controllers 00:27:00.526 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:00.526 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:00.526 Initialization complete. Launching workers. 
00:27:00.526 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10030, failed: 0 00:27:00.526 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1220, failed to submit 8810 00:27:00.526 success 783, unsuccess 437, failed 0 00:27:00.526 22:50:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:00.526 22:50:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:03.826 Initializing NVMe Controllers 00:27:03.826 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:03.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:03.826 Initialization complete. Launching workers. 00:27:03.826 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8640, failed: 0 00:27:03.826 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1243, failed to submit 7397 00:27:03.826 success 316, unsuccess 927, failed 0 00:27:03.826 22:50:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:03.826 22:50:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:06.362 Initializing NVMe Controllers 00:27:06.362 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:06.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:06.362 Initialization complete. Launching workers. 
00:27:06.362 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29670, failed: 0 00:27:06.362 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2720, failed to submit 26950 00:27:06.362 success 484, unsuccess 2236, failed 0 00:27:06.362 22:50:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:27:06.362 22:50:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:06.362 22:50:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:06.362 22:50:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:06.362 22:50:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:27:06.362 22:50:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:06.362 22:50:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:07.734 22:50:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:07.734 22:50:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1384767 00:27:07.734 22:50:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@942 -- # '[' -z 1384767 ']' 00:27:07.734 22:50:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # kill -0 1384767 00:27:07.734 22:50:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@947 -- # uname 00:27:07.734 22:50:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:27:07.734 22:50:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1384767 00:27:07.991 22:50:51 nvmf_abort_qd_sizes.spdk_target_abort 
-- common/autotest_common.sh@948 -- # process_name=reactor_0 00:27:07.991 22:50:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:27:07.991 22:50:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1384767' 00:27:07.991 killing process with pid 1384767 00:27:07.991 22:50:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@961 -- # kill 1384767 00:27:07.991 22:50:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # wait 1384767 00:27:08.251 00:27:08.251 real 0m14.326s 00:27:08.251 user 0m54.093s 00:27:08.251 sys 0m2.603s 00:27:08.251 22:50:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1118 -- # xtrace_disable 00:27:08.251 22:50:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:08.251 ************************************ 00:27:08.251 END TEST spdk_target_abort 00:27:08.251 ************************************ 00:27:08.251 22:50:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1136 -- # return 0 00:27:08.251 22:50:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:27:08.251 22:50:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:27:08.251 22:50:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # xtrace_disable 00:27:08.251 22:50:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:08.251 ************************************ 00:27:08.251 START TEST kernel_target_abort 00:27:08.251 ************************************ 00:27:08.251 22:50:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1117 -- # kernel_target 00:27:08.251 22:50:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:27:08.251 22:50:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local 
ip 00:27:08.251 22:50:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:08.251 22:50:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:08.251 22:50:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.251 22:50:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.251 22:50:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:08.251 22:50:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.251 22:50:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:08.251 22:50:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:08.251 22:50:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:08.251 22:50:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:08.251 22:50:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:08.251 22:50:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:08.251 22:50:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:08.251 22:50:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:08.251 22:50:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:08.251 22:50:51 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:27:08.251 22:50:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:08.251 22:50:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:08.251 22:50:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:08.251 22:50:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:09.660 Waiting for block devices as requested 00:27:09.660 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:27:09.660 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:09.660 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:09.660 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:09.920 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:09.920 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:09.920 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:09.920 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:09.920 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:10.178 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:10.178 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:10.178 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:10.437 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:10.437 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:10.437 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:10.437 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:10.695 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:10.695 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:10.695 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:10.695 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 
00:27:10.695 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1656 -- # local device=nvme0n1 00:27:10.695 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:10.695 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1659 -- # [[ none != none ]] 00:27:10.695 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:10.695 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:10.695 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:10.695 No valid GPT data, bailing 00:27:10.695 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:10.695 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:10.695 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:10.695 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:10.695 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:10.695 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:10.695 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:10.695 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:10.695 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:10.695 22:50:54 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:27:10.695 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:10.695 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:27:10.695 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:10.695 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:27:10.695 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:27:10.695 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:27:10.952 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:10.952 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:27:10.952 00:27:10.952 Discovery Log Number of Records 2, Generation counter 2 00:27:10.952 =====Discovery Log Entry 0====== 00:27:10.952 trtype: tcp 00:27:10.952 adrfam: ipv4 00:27:10.952 subtype: current discovery subsystem 00:27:10.952 treq: not specified, sq flow control disable supported 00:27:10.952 portid: 1 00:27:10.952 trsvcid: 4420 00:27:10.952 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:10.952 traddr: 10.0.0.1 00:27:10.952 eflags: none 00:27:10.952 sectype: none 00:27:10.952 =====Discovery Log Entry 1====== 00:27:10.952 trtype: tcp 00:27:10.952 adrfam: ipv4 00:27:10.952 subtype: nvme subsystem 00:27:10.952 treq: not specified, sq flow control disable supported 00:27:10.952 portid: 1 00:27:10.952 trsvcid: 4420 00:27:10.952 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:10.952 traddr: 10.0.0.1 00:27:10.952 eflags: none 00:27:10.952 
sectype: none 00:27:10.952 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:27:10.952 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:10.952 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:10.952 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:27:10.952 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:10.952 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:10.952 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:10.952 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:10.952 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:10.952 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:10.952 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:10.952 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:10.952 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:10.952 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:10.952 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:27:10.952 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:10.952 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:27:10.952 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:10.952 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:10.952 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:10.952 22:50:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:14.239 Initializing NVMe Controllers 00:27:14.239 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:14.239 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:14.239 Initialization complete. Launching workers. 
00:27:14.239 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 28337, failed: 0 00:27:14.239 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28337, failed to submit 0 00:27:14.239 success 0, unsuccess 28337, failed 0 00:27:14.239 22:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:14.239 22:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:17.517 Initializing NVMe Controllers 00:27:17.517 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:17.517 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:17.517 Initialization complete. Launching workers. 00:27:17.517 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 57809, failed: 0 00:27:17.517 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14554, failed to submit 43255 00:27:17.517 success 0, unsuccess 14554, failed 0 00:27:17.518 22:51:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:17.518 22:51:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:20.794 Initializing NVMe Controllers 00:27:20.794 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:20.794 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:20.794 Initialization complete. Launching workers. 
00:27:20.794 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 56185, failed: 0 00:27:20.794 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14018, failed to submit 42167 00:27:20.794 success 0, unsuccess 14018, failed 0 00:27:20.794 22:51:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:27:20.794 22:51:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:20.794 22:51:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:27:20.794 22:51:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:20.794 22:51:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:20.794 22:51:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:20.794 22:51:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:20.794 22:51:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:20.794 22:51:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:20.794 22:51:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:21.360 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:21.360 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:21.360 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:21.360 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:21.360 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:21.360 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:21.360 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:21.360 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:21.360 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:21.360 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:21.619 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:21.619 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:21.619 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:21.619 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:21.619 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:21.619 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:22.555 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:27:22.555 00:27:22.555 real 0m14.343s 00:27:22.555 user 0m4.727s 00:27:22.555 sys 0m3.440s 00:27:22.555 22:51:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1118 -- # xtrace_disable 00:27:22.555 22:51:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:22.555 ************************************ 00:27:22.555 END TEST kernel_target_abort 00:27:22.555 ************************************ 00:27:22.555 22:51:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1136 -- # return 0 00:27:22.555 22:51:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:22.555 22:51:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:27:22.555 22:51:05 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:22.555 22:51:05 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:27:22.555 22:51:05 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:22.555 22:51:05 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:27:22.555 22:51:05 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:22.555 22:51:05 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:22.555 rmmod nvme_tcp 00:27:22.555 rmmod nvme_fabrics 
00:27:22.555 rmmod nvme_keyring 00:27:22.555 22:51:05 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:22.555 22:51:05 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:27:22.555 22:51:05 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:27:22.555 22:51:05 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1384767 ']' 00:27:22.555 22:51:05 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1384767 00:27:22.555 22:51:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@942 -- # '[' -z 1384767 ']' 00:27:22.555 22:51:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # kill -0 1384767 00:27:22.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 946: kill: (1384767) - No such process 00:27:22.555 22:51:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@969 -- # echo 'Process with pid 1384767 is not found' 00:27:22.555 Process with pid 1384767 is not found 00:27:22.555 22:51:05 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:22.555 22:51:05 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:23.487 Waiting for block devices as requested 00:27:23.747 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:27:23.747 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:24.005 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:24.005 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:24.005 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:24.005 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:24.263 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:24.263 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:24.263 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:24.263 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:24.522 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:24.522 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:24.522 0000:80:04.4 (8086 0e24): 
vfio-pci -> ioatdma 00:27:24.522 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:24.780 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:24.780 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:24.780 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:25.038 22:51:08 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:25.038 22:51:08 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:25.038 22:51:08 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:25.038 22:51:08 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:25.038 22:51:08 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.038 22:51:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:25.038 22:51:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.935 22:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:26.935 00:27:26.935 real 0m37.932s 00:27:26.935 user 1m0.851s 00:27:26.935 sys 0m9.278s 00:27:26.935 22:51:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1118 -- # xtrace_disable 00:27:26.935 22:51:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:26.935 ************************************ 00:27:26.935 END TEST nvmf_abort_qd_sizes 00:27:26.935 ************************************ 00:27:26.935 22:51:10 -- common/autotest_common.sh@1136 -- # return 0 00:27:26.935 22:51:10 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:26.935 22:51:10 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:27:26.935 22:51:10 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:27:26.935 22:51:10 -- common/autotest_common.sh@10 -- # set +x 00:27:26.935 ************************************ 00:27:26.935 START TEST keyring_file 00:27:26.935 
************************************ 00:27:26.935 22:51:10 keyring_file -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:26.935 * Looking for test storage... 00:27:27.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:27.193 22:51:10 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:27.193 22:51:10 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:27.193 
22:51:10 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:27.193 22:51:10 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:27.193 22:51:10 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:27.193 22:51:10 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:27.193 22:51:10 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.193 22:51:10 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.193 22:51:10 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.193 22:51:10 
keyring_file -- paths/export.sh@5 -- # export PATH 00:27:27.193 22:51:10 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@47 -- # : 0 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:27.193 22:51:10 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:27.193 22:51:10 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:27.193 22:51:10 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:27.193 22:51:10 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:27:27.193 22:51:10 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:27:27.193 22:51:10 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:27:27.193 22:51:10 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:27.193 22:51:10 keyring_file -- keyring/common.sh@15 -- # local 
name key digest path 00:27:27.193 22:51:10 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:27.193 22:51:10 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:27.193 22:51:10 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:27.193 22:51:10 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:27.193 22:51:10 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.8f8IO3vPbJ 00:27:27.193 22:51:10 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:27.193 22:51:10 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.8f8IO3vPbJ 00:27:27.193 22:51:10 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.8f8IO3vPbJ 00:27:27.193 22:51:10 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.8f8IO3vPbJ 00:27:27.193 22:51:10 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:27:27.193 22:51:10 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:27.193 22:51:10 keyring_file -- keyring/common.sh@17 -- # name=key1 00:27:27.193 22:51:10 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:27.193 22:51:10 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:27.193 22:51:10 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:27.193 22:51:10 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.OHuqHpYu51 00:27:27.193 22:51:10 
keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:27.193 22:51:10 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:27.193 22:51:10 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.OHuqHpYu51 00:27:27.193 22:51:10 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.OHuqHpYu51 00:27:27.193 22:51:10 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.OHuqHpYu51 00:27:27.193 22:51:10 keyring_file -- keyring/file.sh@30 -- # tgtpid=1391152 00:27:27.193 22:51:10 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:27.193 22:51:10 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1391152 00:27:27.193 22:51:10 keyring_file -- common/autotest_common.sh@823 -- # '[' -z 1391152 ']' 00:27:27.193 22:51:10 keyring_file -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:27.193 22:51:10 keyring_file -- common/autotest_common.sh@828 -- # local max_retries=100 00:27:27.193 22:51:10 keyring_file -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:27.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:27.194 22:51:10 keyring_file -- common/autotest_common.sh@832 -- # xtrace_disable 00:27:27.194 22:51:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:27.194 [2024-07-15 22:51:10.588200] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:27:27.194 [2024-07-15 22:51:10.588296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1391152 ] 00:27:27.194 [2024-07-15 22:51:10.644686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.452 [2024-07-15 22:51:10.758791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.717 22:51:11 keyring_file -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:27:27.717 22:51:11 keyring_file -- common/autotest_common.sh@856 -- # return 0 00:27:27.717 22:51:11 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:27:27.717 22:51:11 keyring_file -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:27.717 22:51:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:27.717 [2024-07-15 22:51:11.020703] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:27.717 null0 00:27:27.717 [2024-07-15 22:51:11.052752] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:27.717 [2024-07-15 22:51:11.053242] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:27.717 [2024-07-15 22:51:11.060759] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:27.717 22:51:11 keyring_file -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:27.717 22:51:11 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 
00:27:27.717 22:51:11 keyring_file -- common/autotest_common.sh@642 -- # local es=0 00:27:27.717 22:51:11 keyring_file -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:27.717 22:51:11 keyring_file -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:27:27.717 22:51:11 keyring_file -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:27:27.717 22:51:11 keyring_file -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:27:27.717 22:51:11 keyring_file -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:27:27.717 22:51:11 keyring_file -- common/autotest_common.sh@645 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:27.717 22:51:11 keyring_file -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:27.717 22:51:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:27.717 [2024-07-15 22:51:11.068773] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:27:27.717 request: 00:27:27.717 { 00:27:27.717 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:27:27.717 "secure_channel": false, 00:27:27.717 "listen_address": { 00:27:27.717 "trtype": "tcp", 00:27:27.717 "traddr": "127.0.0.1", 00:27:27.717 "trsvcid": "4420" 00:27:27.717 }, 00:27:27.717 "method": "nvmf_subsystem_add_listener", 00:27:27.717 "req_id": 1 00:27:27.717 } 00:27:27.717 Got JSON-RPC error response 00:27:27.717 response: 00:27:27.717 { 00:27:27.717 "code": -32602, 00:27:27.717 "message": "Invalid parameters" 00:27:27.717 } 00:27:27.717 22:51:11 keyring_file -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:27:27.717 22:51:11 keyring_file -- common/autotest_common.sh@645 -- # es=1 00:27:27.717 22:51:11 keyring_file -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:27:27.717 22:51:11 keyring_file -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:27:27.717 22:51:11 keyring_file -- 
common/autotest_common.sh@669 -- # (( !es == 0 )) 00:27:27.717 22:51:11 keyring_file -- keyring/file.sh@46 -- # bperfpid=1391157 00:27:27.717 22:51:11 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:27:27.717 22:51:11 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1391157 /var/tmp/bperf.sock 00:27:27.717 22:51:11 keyring_file -- common/autotest_common.sh@823 -- # '[' -z 1391157 ']' 00:27:27.717 22:51:11 keyring_file -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:27.717 22:51:11 keyring_file -- common/autotest_common.sh@828 -- # local max_retries=100 00:27:27.717 22:51:11 keyring_file -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:27.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:27.717 22:51:11 keyring_file -- common/autotest_common.sh@832 -- # xtrace_disable 00:27:27.717 22:51:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:27.717 [2024-07-15 22:51:11.117000] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:27:27.717 [2024-07-15 22:51:11.117077] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1391157 ] 00:27:27.717 [2024-07-15 22:51:11.177400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.974 [2024-07-15 22:51:11.295640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:27.974 22:51:11 keyring_file -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:27:27.974 22:51:11 keyring_file -- common/autotest_common.sh@856 -- # return 0 00:27:27.974 22:51:11 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8f8IO3vPbJ 00:27:27.974 22:51:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8f8IO3vPbJ 00:27:28.231 22:51:11 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.OHuqHpYu51 00:27:28.231 22:51:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.OHuqHpYu51 00:27:28.488 22:51:11 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:27:28.488 22:51:11 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:27:28.488 22:51:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:28.488 22:51:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:28.488 22:51:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:28.744 22:51:12 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.8f8IO3vPbJ == \/\t\m\p\/\t\m\p\.\8\f\8\I\O\3\v\P\b\J ]] 00:27:28.744 22:51:12 keyring_file -- keyring/file.sh@52 
-- # get_key key1 00:27:28.744 22:51:12 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:27:28.744 22:51:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:28.744 22:51:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:28.744 22:51:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:29.002 22:51:12 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.OHuqHpYu51 == \/\t\m\p\/\t\m\p\.\O\H\u\q\H\p\Y\u\5\1 ]] 00:27:29.002 22:51:12 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:27:29.002 22:51:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:29.002 22:51:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:29.002 22:51:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:29.002 22:51:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:29.002 22:51:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:29.259 22:51:12 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:27:29.259 22:51:12 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:27:29.259 22:51:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:29.259 22:51:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:29.259 22:51:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:29.259 22:51:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:29.259 22:51:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:29.516 22:51:12 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:27:29.516 22:51:12 keyring_file -- keyring/file.sh@57 -- # bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:29.516 22:51:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:29.774 [2024-07-15 22:51:13.150627] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:29.774 nvme0n1 00:27:29.774 22:51:13 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:27:29.774 22:51:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:29.774 22:51:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:29.774 22:51:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:29.774 22:51:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:29.774 22:51:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:30.031 22:51:13 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:27:30.031 22:51:13 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:27:30.031 22:51:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:30.031 22:51:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:30.031 22:51:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:30.031 22:51:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:30.031 22:51:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:30.287 22:51:13 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:27:30.287 22:51:13 keyring_file -- keyring/file.sh@62 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:30.546 Running I/O for 1 seconds...
00:27:31.482
00:27:31.482 Latency(us)
00:27:31.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:31.482 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:27:31.482 nvme0n1 : 1.01 4941.94 19.30 0.00 0.00 25747.30 9951.76 40389.59
00:27:31.482 ===================================================================================================================
00:27:31.482 Total : 4941.94 19.30 0.00 0.00 25747.30 9951.76 40389.59
00:27:31.482 0
00:27:31.482 22:51:14 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:27:31.482 22:51:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:27:31.740 22:51:15 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0
00:27:31.740 22:51:15 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:27:31.740 22:51:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:27:31.740 22:51:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:27:31.740 22:51:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:27:31.740 22:51:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:27:31.998 22:51:15 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 ))
00:27:31.998 22:51:15 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1
00:27:31.998 22:51:15 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:27:31.998 22:51:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:27:31.998 22:51:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:27:31.998 22:51:15 keyring_file
-- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:31.999 22:51:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:32.257 22:51:15 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:27:32.257 22:51:15 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:32.257 22:51:15 keyring_file -- common/autotest_common.sh@642 -- # local es=0 00:27:32.257 22:51:15 keyring_file -- common/autotest_common.sh@644 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:32.257 22:51:15 keyring_file -- common/autotest_common.sh@630 -- # local arg=bperf_cmd 00:27:32.257 22:51:15 keyring_file -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:27:32.257 22:51:15 keyring_file -- common/autotest_common.sh@634 -- # type -t bperf_cmd 00:27:32.257 22:51:15 keyring_file -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:27:32.257 22:51:15 keyring_file -- common/autotest_common.sh@645 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:32.257 22:51:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:32.516 [2024-07-15 22:51:15.862649] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 
00:27:32.516 [2024-07-15 22:51:15.862868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9809a0 (107): Transport endpoint is not connected 00:27:32.516 [2024-07-15 22:51:15.863858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9809a0 (9): Bad file descriptor 00:27:32.516 [2024-07-15 22:51:15.864857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:32.516 [2024-07-15 22:51:15.864891] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:32.516 [2024-07-15 22:51:15.864908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:32.516 request: 00:27:32.516 { 00:27:32.516 "name": "nvme0", 00:27:32.516 "trtype": "tcp", 00:27:32.516 "traddr": "127.0.0.1", 00:27:32.516 "adrfam": "ipv4", 00:27:32.516 "trsvcid": "4420", 00:27:32.516 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:32.516 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:32.516 "prchk_reftag": false, 00:27:32.516 "prchk_guard": false, 00:27:32.516 "hdgst": false, 00:27:32.516 "ddgst": false, 00:27:32.516 "psk": "key1", 00:27:32.516 "method": "bdev_nvme_attach_controller", 00:27:32.516 "req_id": 1 00:27:32.516 } 00:27:32.516 Got JSON-RPC error response 00:27:32.516 response: 00:27:32.516 { 00:27:32.516 "code": -5, 00:27:32.516 "message": "Input/output error" 00:27:32.516 } 00:27:32.516 22:51:15 keyring_file -- common/autotest_common.sh@645 -- # es=1 00:27:32.516 22:51:15 keyring_file -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:27:32.516 22:51:15 keyring_file -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:27:32.516 22:51:15 keyring_file -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:27:32.516 22:51:15 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:27:32.516 22:51:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:32.516 22:51:15 keyring_file -- 
keyring/common.sh@12 -- # jq -r .refcnt 00:27:32.516 22:51:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:32.516 22:51:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:32.516 22:51:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:32.775 22:51:16 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:27:32.775 22:51:16 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:27:32.775 22:51:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:32.775 22:51:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:32.775 22:51:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:32.775 22:51:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:32.775 22:51:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:33.033 22:51:16 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:27:33.033 22:51:16 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:27:33.033 22:51:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:33.291 22:51:16 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:27:33.291 22:51:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:27:33.548 22:51:16 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:27:33.548 22:51:16 keyring_file -- keyring/file.sh@77 -- # jq length 00:27:33.548 22:51:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys
00:27:33.806 22:51:17 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 ))
00:27:33.806 22:51:17 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.8f8IO3vPbJ
00:27:33.806 22:51:17 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.8f8IO3vPbJ
00:27:33.806 22:51:17 keyring_file -- common/autotest_common.sh@642 -- # local es=0
00:27:33.806 22:51:17 keyring_file -- common/autotest_common.sh@644 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.8f8IO3vPbJ
00:27:33.806 22:51:17 keyring_file -- common/autotest_common.sh@630 -- # local arg=bperf_cmd
00:27:33.806 22:51:17 keyring_file -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in
00:27:33.806 22:51:17 keyring_file -- common/autotest_common.sh@634 -- # type -t bperf_cmd
00:27:33.806 22:51:17 keyring_file -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in
00:27:33.806 22:51:17 keyring_file -- common/autotest_common.sh@645 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8f8IO3vPbJ
00:27:33.806 22:51:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8f8IO3vPbJ
00:27:34.064 [2024-07-15 22:51:17.366808] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.8f8IO3vPbJ': 0100660
00:27:34.064 [2024-07-15 22:51:17.366844] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
00:27:34.064 request:
00:27:34.064 {
00:27:34.064 "name": "key0",
00:27:34.064 "path": "/tmp/tmp.8f8IO3vPbJ",
00:27:34.064 "method": "keyring_file_add_key",
00:27:34.064 "req_id": 1
00:27:34.064 }
00:27:34.064 Got JSON-RPC error response
00:27:34.064 response:
00:27:34.064 {
00:27:34.064 "code": -1,
00:27:34.064 "message": "Operation not permitted"
00:27:34.064 }
00:27:34.064 22:51:17 keyring_file -- common/autotest_common.sh@645 -- # es=1
00:27:34.064 22:51:17 keyring_file -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:27:34.064 22:51:17 keyring_file -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:27:34.064 22:51:17 keyring_file -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:27:34.064 22:51:17 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.8f8IO3vPbJ 00:27:34.064 22:51:17 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8f8IO3vPbJ 00:27:34.064 22:51:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8f8IO3vPbJ 00:27:34.322 22:51:17 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.8f8IO3vPbJ 00:27:34.322 22:51:17 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:27:34.322 22:51:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:34.322 22:51:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:34.322 22:51:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:34.322 22:51:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:34.322 22:51:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:34.579 22:51:17 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:27:34.580 22:51:17 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:34.580 22:51:17 keyring_file -- common/autotest_common.sh@642 -- # local es=0 00:27:34.580 22:51:17 keyring_file -- common/autotest_common.sh@644 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:34.580 22:51:17 
keyring_file -- common/autotest_common.sh@630 -- # local arg=bperf_cmd 00:27:34.580 22:51:17 keyring_file -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:27:34.580 22:51:17 keyring_file -- common/autotest_common.sh@634 -- # type -t bperf_cmd 00:27:34.580 22:51:17 keyring_file -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:27:34.580 22:51:17 keyring_file -- common/autotest_common.sh@645 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:34.580 22:51:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:34.838 [2024-07-15 22:51:18.096780] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.8f8IO3vPbJ': No such file or directory 00:27:34.838 [2024-07-15 22:51:18.096816] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:27:34.838 [2024-07-15 22:51:18.096857] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:27:34.838 [2024-07-15 22:51:18.096868] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:34.838 [2024-07-15 22:51:18.096913] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:27:34.838 request: 00:27:34.838 { 00:27:34.838 "name": "nvme0", 00:27:34.838 "trtype": "tcp", 00:27:34.838 "traddr": "127.0.0.1", 00:27:34.838 "adrfam": "ipv4", 00:27:34.838 "trsvcid": "4420", 00:27:34.838 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:34.838 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:34.838 "prchk_reftag": false, 00:27:34.838 "prchk_guard": false, 00:27:34.838 "hdgst": false, 
00:27:34.838 "ddgst": false, 00:27:34.838 "psk": "key0", 00:27:34.838 "method": "bdev_nvme_attach_controller", 00:27:34.838 "req_id": 1 00:27:34.838 } 00:27:34.838 Got JSON-RPC error response 00:27:34.838 response: 00:27:34.838 { 00:27:34.838 "code": -19, 00:27:34.838 "message": "No such device" 00:27:34.838 } 00:27:34.838 22:51:18 keyring_file -- common/autotest_common.sh@645 -- # es=1 00:27:34.838 22:51:18 keyring_file -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:27:34.838 22:51:18 keyring_file -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:27:34.838 22:51:18 keyring_file -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:27:34.838 22:51:18 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:27:34.838 22:51:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:35.097 22:51:18 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:35.097 22:51:18 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:35.097 22:51:18 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:35.097 22:51:18 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:35.097 22:51:18 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:35.097 22:51:18 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:35.097 22:51:18 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.NP1LzjaUys 00:27:35.097 22:51:18 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:35.097 22:51:18 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:35.097 22:51:18 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:35.097 22:51:18 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:35.097 22:51:18 keyring_file -- 
nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:35.097 22:51:18 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:35.097 22:51:18 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:35.097 22:51:18 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.NP1LzjaUys 00:27:35.097 22:51:18 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.NP1LzjaUys 00:27:35.097 22:51:18 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.NP1LzjaUys 00:27:35.097 22:51:18 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NP1LzjaUys 00:27:35.097 22:51:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NP1LzjaUys 00:27:35.354 22:51:18 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:35.354 22:51:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:35.610 nvme0n1 00:27:35.610 22:51:18 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:27:35.610 22:51:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:35.610 22:51:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:35.610 22:51:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:35.610 22:51:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:35.610 22:51:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:35.865 22:51:19 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 
00:27:35.865 22:51:19 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:27:35.865 22:51:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:36.122 22:51:19 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:27:36.122 22:51:19 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:27:36.122 22:51:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:36.122 22:51:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:36.122 22:51:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:36.380 22:51:19 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:27:36.380 22:51:19 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:27:36.380 22:51:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:36.380 22:51:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:36.380 22:51:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:36.380 22:51:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:36.380 22:51:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:36.639 22:51:19 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:27:36.639 22:51:19 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:36.639 22:51:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:36.895 22:51:20 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:27:36.895 22:51:20 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:36.895 22:51:20 keyring_file -- keyring/file.sh@104 -- # jq length 00:27:37.153 22:51:20 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:27:37.153 22:51:20 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NP1LzjaUys 00:27:37.153 22:51:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NP1LzjaUys 00:27:37.409 22:51:20 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.OHuqHpYu51 00:27:37.409 22:51:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.OHuqHpYu51 00:27:37.666 22:51:20 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:37.666 22:51:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:37.926 nvme0n1 00:27:37.926 22:51:21 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:27:37.926 22:51:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:27:38.182 22:51:21 keyring_file -- keyring/file.sh@112 -- # config='{ 00:27:38.182 "subsystems": [ 00:27:38.182 { 00:27:38.182 "subsystem": "keyring", 00:27:38.182 "config": [ 00:27:38.182 { 00:27:38.182 "method": "keyring_file_add_key", 00:27:38.182 "params": { 00:27:38.182 "name": "key0", 00:27:38.182 "path": 
"/tmp/tmp.NP1LzjaUys" 00:27:38.182 } 00:27:38.182 }, 00:27:38.182 { 00:27:38.182 "method": "keyring_file_add_key", 00:27:38.182 "params": { 00:27:38.182 "name": "key1", 00:27:38.182 "path": "/tmp/tmp.OHuqHpYu51" 00:27:38.182 } 00:27:38.182 } 00:27:38.182 ] 00:27:38.182 }, 00:27:38.182 { 00:27:38.182 "subsystem": "iobuf", 00:27:38.182 "config": [ 00:27:38.182 { 00:27:38.182 "method": "iobuf_set_options", 00:27:38.182 "params": { 00:27:38.182 "small_pool_count": 8192, 00:27:38.182 "large_pool_count": 1024, 00:27:38.182 "small_bufsize": 8192, 00:27:38.182 "large_bufsize": 135168 00:27:38.182 } 00:27:38.182 } 00:27:38.182 ] 00:27:38.182 }, 00:27:38.182 { 00:27:38.182 "subsystem": "sock", 00:27:38.182 "config": [ 00:27:38.182 { 00:27:38.182 "method": "sock_set_default_impl", 00:27:38.182 "params": { 00:27:38.182 "impl_name": "posix" 00:27:38.182 } 00:27:38.182 }, 00:27:38.182 { 00:27:38.182 "method": "sock_impl_set_options", 00:27:38.182 "params": { 00:27:38.182 "impl_name": "ssl", 00:27:38.182 "recv_buf_size": 4096, 00:27:38.182 "send_buf_size": 4096, 00:27:38.182 "enable_recv_pipe": true, 00:27:38.182 "enable_quickack": false, 00:27:38.182 "enable_placement_id": 0, 00:27:38.182 "enable_zerocopy_send_server": true, 00:27:38.182 "enable_zerocopy_send_client": false, 00:27:38.182 "zerocopy_threshold": 0, 00:27:38.182 "tls_version": 0, 00:27:38.182 "enable_ktls": false 00:27:38.182 } 00:27:38.182 }, 00:27:38.182 { 00:27:38.182 "method": "sock_impl_set_options", 00:27:38.182 "params": { 00:27:38.182 "impl_name": "posix", 00:27:38.182 "recv_buf_size": 2097152, 00:27:38.182 "send_buf_size": 2097152, 00:27:38.182 "enable_recv_pipe": true, 00:27:38.182 "enable_quickack": false, 00:27:38.182 "enable_placement_id": 0, 00:27:38.182 "enable_zerocopy_send_server": true, 00:27:38.182 "enable_zerocopy_send_client": false, 00:27:38.182 "zerocopy_threshold": 0, 00:27:38.182 "tls_version": 0, 00:27:38.182 "enable_ktls": false 00:27:38.182 } 00:27:38.182 } 00:27:38.183 ] 00:27:38.183 }, 
00:27:38.183 { 00:27:38.183 "subsystem": "vmd", 00:27:38.183 "config": [] 00:27:38.183 }, 00:27:38.183 { 00:27:38.183 "subsystem": "accel", 00:27:38.183 "config": [ 00:27:38.183 { 00:27:38.183 "method": "accel_set_options", 00:27:38.183 "params": { 00:27:38.183 "small_cache_size": 128, 00:27:38.183 "large_cache_size": 16, 00:27:38.183 "task_count": 2048, 00:27:38.183 "sequence_count": 2048, 00:27:38.183 "buf_count": 2048 00:27:38.183 } 00:27:38.183 } 00:27:38.183 ] 00:27:38.183 }, 00:27:38.183 { 00:27:38.183 "subsystem": "bdev", 00:27:38.183 "config": [ 00:27:38.183 { 00:27:38.183 "method": "bdev_set_options", 00:27:38.183 "params": { 00:27:38.183 "bdev_io_pool_size": 65535, 00:27:38.183 "bdev_io_cache_size": 256, 00:27:38.183 "bdev_auto_examine": true, 00:27:38.183 "iobuf_small_cache_size": 128, 00:27:38.183 "iobuf_large_cache_size": 16 00:27:38.183 } 00:27:38.183 }, 00:27:38.183 { 00:27:38.183 "method": "bdev_raid_set_options", 00:27:38.183 "params": { 00:27:38.183 "process_window_size_kb": 1024 00:27:38.183 } 00:27:38.183 }, 00:27:38.183 { 00:27:38.183 "method": "bdev_iscsi_set_options", 00:27:38.183 "params": { 00:27:38.183 "timeout_sec": 30 00:27:38.183 } 00:27:38.183 }, 00:27:38.183 { 00:27:38.183 "method": "bdev_nvme_set_options", 00:27:38.183 "params": { 00:27:38.183 "action_on_timeout": "none", 00:27:38.183 "timeout_us": 0, 00:27:38.183 "timeout_admin_us": 0, 00:27:38.183 "keep_alive_timeout_ms": 10000, 00:27:38.183 "arbitration_burst": 0, 00:27:38.183 "low_priority_weight": 0, 00:27:38.183 "medium_priority_weight": 0, 00:27:38.183 "high_priority_weight": 0, 00:27:38.183 "nvme_adminq_poll_period_us": 10000, 00:27:38.183 "nvme_ioq_poll_period_us": 0, 00:27:38.183 "io_queue_requests": 512, 00:27:38.183 "delay_cmd_submit": true, 00:27:38.183 "transport_retry_count": 4, 00:27:38.183 "bdev_retry_count": 3, 00:27:38.183 "transport_ack_timeout": 0, 00:27:38.183 "ctrlr_loss_timeout_sec": 0, 00:27:38.183 "reconnect_delay_sec": 0, 00:27:38.183 
"fast_io_fail_timeout_sec": 0, 00:27:38.183 "disable_auto_failback": false, 00:27:38.183 "generate_uuids": false, 00:27:38.183 "transport_tos": 0, 00:27:38.183 "nvme_error_stat": false, 00:27:38.183 "rdma_srq_size": 0, 00:27:38.183 "io_path_stat": false, 00:27:38.183 "allow_accel_sequence": false, 00:27:38.183 "rdma_max_cq_size": 0, 00:27:38.183 "rdma_cm_event_timeout_ms": 0, 00:27:38.183 "dhchap_digests": [ 00:27:38.183 "sha256", 00:27:38.183 "sha384", 00:27:38.183 "sha512" 00:27:38.183 ], 00:27:38.183 "dhchap_dhgroups": [ 00:27:38.183 "null", 00:27:38.183 "ffdhe2048", 00:27:38.183 "ffdhe3072", 00:27:38.183 "ffdhe4096", 00:27:38.183 "ffdhe6144", 00:27:38.183 "ffdhe8192" 00:27:38.183 ] 00:27:38.183 } 00:27:38.183 }, 00:27:38.183 { 00:27:38.183 "method": "bdev_nvme_attach_controller", 00:27:38.183 "params": { 00:27:38.183 "name": "nvme0", 00:27:38.183 "trtype": "TCP", 00:27:38.183 "adrfam": "IPv4", 00:27:38.183 "traddr": "127.0.0.1", 00:27:38.183 "trsvcid": "4420", 00:27:38.183 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:38.183 "prchk_reftag": false, 00:27:38.183 "prchk_guard": false, 00:27:38.183 "ctrlr_loss_timeout_sec": 0, 00:27:38.183 "reconnect_delay_sec": 0, 00:27:38.183 "fast_io_fail_timeout_sec": 0, 00:27:38.183 "psk": "key0", 00:27:38.183 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:38.183 "hdgst": false, 00:27:38.183 "ddgst": false 00:27:38.183 } 00:27:38.183 }, 00:27:38.183 { 00:27:38.183 "method": "bdev_nvme_set_hotplug", 00:27:38.183 "params": { 00:27:38.183 "period_us": 100000, 00:27:38.183 "enable": false 00:27:38.183 } 00:27:38.183 }, 00:27:38.183 { 00:27:38.183 "method": "bdev_wait_for_examine" 00:27:38.183 } 00:27:38.183 ] 00:27:38.183 }, 00:27:38.183 { 00:27:38.183 "subsystem": "nbd", 00:27:38.183 "config": [] 00:27:38.183 } 00:27:38.183 ] 00:27:38.183 }' 00:27:38.183 22:51:21 keyring_file -- keyring/file.sh@114 -- # killprocess 1391157 00:27:38.183 22:51:21 keyring_file -- common/autotest_common.sh@942 -- # '[' -z 1391157 ']' 00:27:38.183 
22:51:21 keyring_file -- common/autotest_common.sh@946 -- # kill -0 1391157 00:27:38.183 22:51:21 keyring_file -- common/autotest_common.sh@947 -- # uname 00:27:38.183 22:51:21 keyring_file -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:27:38.183 22:51:21 keyring_file -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1391157 00:27:38.183 22:51:21 keyring_file -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:27:38.183 22:51:21 keyring_file -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:27:38.183 22:51:21 keyring_file -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1391157' 00:27:38.183 killing process with pid 1391157 00:27:38.183 22:51:21 keyring_file -- common/autotest_common.sh@961 -- # kill 1391157 00:27:38.183 Received shutdown signal, test time was about 1.000000 seconds 00:27:38.183 00:27:38.183 Latency(us) 00:27:38.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:38.183 =================================================================================================================== 00:27:38.183 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:38.183 22:51:21 keyring_file -- common/autotest_common.sh@966 -- # wait 1391157 00:27:38.440 22:51:21 keyring_file -- keyring/file.sh@117 -- # bperfpid=1392621 00:27:38.441 22:51:21 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1392621 /var/tmp/bperf.sock 00:27:38.441 22:51:21 keyring_file -- common/autotest_common.sh@823 -- # '[' -z 1392621 ']' 00:27:38.441 22:51:21 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:27:38.441 22:51:21 keyring_file -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:38.441 22:51:21 keyring_file -- common/autotest_common.sh@828 -- # local max_retries=100 00:27:38.441 22:51:21 keyring_file -- 
common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:38.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:38.441 22:51:21 keyring_file -- common/autotest_common.sh@832 -- # xtrace_disable 00:27:38.441 22:51:21 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:27:38.441 "subsystems": [ 00:27:38.441 { 00:27:38.441 "subsystem": "keyring", 00:27:38.441 "config": [ 00:27:38.441 { 00:27:38.441 "method": "keyring_file_add_key", 00:27:38.441 "params": { 00:27:38.441 "name": "key0", 00:27:38.441 "path": "/tmp/tmp.NP1LzjaUys" 00:27:38.441 } 00:27:38.441 }, 00:27:38.441 { 00:27:38.441 "method": "keyring_file_add_key", 00:27:38.441 "params": { 00:27:38.441 "name": "key1", 00:27:38.441 "path": "/tmp/tmp.OHuqHpYu51" 00:27:38.441 } 00:27:38.441 } 00:27:38.441 ] 00:27:38.441 }, 00:27:38.441 { 00:27:38.441 "subsystem": "iobuf", 00:27:38.441 "config": [ 00:27:38.441 { 00:27:38.441 "method": "iobuf_set_options", 00:27:38.441 "params": { 00:27:38.441 "small_pool_count": 8192, 00:27:38.441 "large_pool_count": 1024, 00:27:38.441 "small_bufsize": 8192, 00:27:38.441 "large_bufsize": 135168 00:27:38.441 } 00:27:38.441 } 00:27:38.441 ] 00:27:38.441 }, 00:27:38.441 { 00:27:38.441 "subsystem": "sock", 00:27:38.441 "config": [ 00:27:38.441 { 00:27:38.441 "method": "sock_set_default_impl", 00:27:38.441 "params": { 00:27:38.441 "impl_name": "posix" 00:27:38.441 } 00:27:38.441 }, 00:27:38.441 { 00:27:38.441 "method": "sock_impl_set_options", 00:27:38.441 "params": { 00:27:38.441 "impl_name": "ssl", 00:27:38.441 "recv_buf_size": 4096, 00:27:38.441 "send_buf_size": 4096, 00:27:38.441 "enable_recv_pipe": true, 00:27:38.441 "enable_quickack": false, 00:27:38.441 "enable_placement_id": 0, 00:27:38.441 "enable_zerocopy_send_server": true, 00:27:38.441 "enable_zerocopy_send_client": false, 00:27:38.441 "zerocopy_threshold": 0, 00:27:38.441 "tls_version": 0, 00:27:38.441 
"enable_ktls": false 00:27:38.441 } 00:27:38.441 }, 00:27:38.441 { 00:27:38.441 "method": "sock_impl_set_options", 00:27:38.441 "params": { 00:27:38.441 "impl_name": "posix", 00:27:38.441 "recv_buf_size": 2097152, 00:27:38.441 "send_buf_size": 2097152, 00:27:38.441 "enable_recv_pipe": true, 00:27:38.441 "enable_quickack": false, 00:27:38.441 "enable_placement_id": 0, 00:27:38.441 "enable_zerocopy_send_server": true, 00:27:38.441 "enable_zerocopy_send_client": false, 00:27:38.441 "zerocopy_threshold": 0, 00:27:38.441 "tls_version": 0, 00:27:38.441 "enable_ktls": false 00:27:38.441 } 00:27:38.441 } 00:27:38.441 ] 00:27:38.441 }, 00:27:38.441 { 00:27:38.441 "subsystem": "vmd", 00:27:38.441 "config": [] 00:27:38.441 }, 00:27:38.441 { 00:27:38.441 "subsystem": "accel", 00:27:38.441 "config": [ 00:27:38.441 { 00:27:38.441 "method": "accel_set_options", 00:27:38.441 "params": { 00:27:38.441 "small_cache_size": 128, 00:27:38.441 "large_cache_size": 16, 00:27:38.441 "task_count": 2048, 00:27:38.441 "sequence_count": 2048, 00:27:38.441 "buf_count": 2048 00:27:38.441 } 00:27:38.441 } 00:27:38.441 ] 00:27:38.441 }, 00:27:38.441 { 00:27:38.441 "subsystem": "bdev", 00:27:38.441 "config": [ 00:27:38.441 { 00:27:38.441 "method": "bdev_set_options", 00:27:38.441 "params": { 00:27:38.441 "bdev_io_pool_size": 65535, 00:27:38.441 "bdev_io_cache_size": 256, 00:27:38.441 "bdev_auto_examine": true, 00:27:38.441 "iobuf_small_cache_size": 128, 00:27:38.441 "iobuf_large_cache_size": 16 00:27:38.441 } 00:27:38.441 }, 00:27:38.441 { 00:27:38.441 "method": "bdev_raid_set_options", 00:27:38.441 "params": { 00:27:38.441 "process_window_size_kb": 1024 00:27:38.441 } 00:27:38.441 }, 00:27:38.441 { 00:27:38.441 "method": "bdev_iscsi_set_options", 00:27:38.441 "params": { 00:27:38.441 "timeout_sec": 30 00:27:38.441 } 00:27:38.441 }, 00:27:38.441 { 00:27:38.441 "method": "bdev_nvme_set_options", 00:27:38.441 "params": { 00:27:38.441 "action_on_timeout": "none", 00:27:38.441 "timeout_us": 0, 
00:27:38.441 "timeout_admin_us": 0, 00:27:38.441 "keep_alive_timeout_ms": 10000, 00:27:38.441 "arbitration_burst": 0, 00:27:38.441 "low_priority_weight": 0, 00:27:38.441 "medium_priority_weight": 0, 00:27:38.441 "high_priority_weight": 0, 00:27:38.441 "nvme_adminq_poll_period_us": 10000, 00:27:38.441 "nvme_ioq_poll_period_us": 0, 00:27:38.441 "io_queue_requests": 512, 00:27:38.441 "delay_cmd_submit": true, 00:27:38.441 "transport_retry_count": 4, 00:27:38.441 "bdev_retry_count": 3, 00:27:38.441 "transport_ack_timeout": 0, 00:27:38.441 "ctrlr_loss_timeout_sec": 0, 00:27:38.441 "reconnect_delay_sec": 0, 00:27:38.441 "fast_io_fail_timeout_sec": 0, 00:27:38.441 "disable_auto_failback": false, 00:27:38.441 "generate_uuids": false, 00:27:38.441 "transport_tos": 0, 00:27:38.441 "nvme_error_stat": false, 00:27:38.441 "rdma_srq_size": 0, 00:27:38.441 "io_path_stat": false, 00:27:38.441 "allow_accel_sequence": false, 00:27:38.441 "rdma_max_cq_size": 0, 00:27:38.441 "rdma_cm_event_timeout_ms": 0, 00:27:38.441 "dhchap_digests": [ 00:27:38.441 "sha256", 00:27:38.441 "sha384", 00:27:38.441 "sha512" 00:27:38.441 ], 00:27:38.441 "dhchap_dhgroups": [ 00:27:38.441 "null", 00:27:38.441 "ffdhe2048", 00:27:38.441 "ffdhe3072", 00:27:38.441 "ffdhe4096", 00:27:38.441 "ffdhe6144", 00:27:38.441 "ffdhe8192" 00:27:38.441 ] 00:27:38.441 } 00:27:38.441 }, 00:27:38.441 { 00:27:38.441 "method": "bdev_nvme_attach_controller", 00:27:38.441 "params": { 00:27:38.441 "name": "nvme0", 00:27:38.441 "trtype": "TCP", 00:27:38.441 "adrfam": "IPv4", 00:27:38.441 "traddr": "127.0.0.1", 00:27:38.441 "trsvcid": "4420", 00:27:38.441 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:38.441 "prchk_reftag": false, 00:27:38.441 "prchk_guard": false, 00:27:38.441 "ctrlr_loss_timeout_sec": 0, 00:27:38.441 "reconnect_delay_sec": 0, 00:27:38.441 "fast_io_fail_timeout_sec": 0, 00:27:38.441 "psk": "key0", 00:27:38.441 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:38.441 "hdgst": false, 00:27:38.441 "ddgst": false 
00:27:38.441 } 00:27:38.441 }, 00:27:38.441 { 00:27:38.441 "method": "bdev_nvme_set_hotplug", 00:27:38.441 "params": { 00:27:38.441 "period_us": 100000, 00:27:38.441 "enable": false 00:27:38.441 } 00:27:38.441 }, 00:27:38.441 { 00:27:38.441 "method": "bdev_wait_for_examine" 00:27:38.441 } 00:27:38.441 ] 00:27:38.441 }, 00:27:38.441 { 00:27:38.441 "subsystem": "nbd", 00:27:38.441 "config": [] 00:27:38.441 } 00:27:38.441 ] 00:27:38.441 }' 00:27:38.441 22:51:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:38.441 [2024-07-15 22:51:21.937396] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 00:27:38.441 [2024-07-15 22:51:21.937478] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1392621 ] 00:27:38.698 [2024-07-15 22:51:22.000151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.698 [2024-07-15 22:51:22.116063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:38.955 [2024-07-15 22:51:22.300238] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:39.540 22:51:22 keyring_file -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:27:39.540 22:51:22 keyring_file -- common/autotest_common.sh@856 -- # return 0 00:27:39.540 22:51:22 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:27:39.540 22:51:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:39.540 22:51:22 keyring_file -- keyring/file.sh@120 -- # jq length 00:27:39.798 22:51:23 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:27:39.798 22:51:23 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:27:39.798 22:51:23 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:27:39.798 22:51:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:39.798 22:51:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:39.798 22:51:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:39.798 22:51:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:40.054 22:51:23 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:27:40.054 22:51:23 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:27:40.054 22:51:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:40.055 22:51:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:40.055 22:51:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:40.055 22:51:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:40.055 22:51:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:40.323 22:51:23 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:27:40.323 22:51:23 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:27:40.323 22:51:23 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:27:40.323 22:51:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:27:40.645 22:51:23 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:27:40.645 22:51:23 keyring_file -- keyring/file.sh@1 -- # cleanup 00:27:40.645 22:51:23 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.NP1LzjaUys /tmp/tmp.OHuqHpYu51 00:27:40.645 22:51:23 keyring_file -- keyring/file.sh@20 -- # killprocess 1392621 00:27:40.645 22:51:23 keyring_file -- common/autotest_common.sh@942 -- # '[' -z 
1392621 ']' 00:27:40.645 22:51:23 keyring_file -- common/autotest_common.sh@946 -- # kill -0 1392621 00:27:40.645 22:51:23 keyring_file -- common/autotest_common.sh@947 -- # uname 00:27:40.645 22:51:23 keyring_file -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:27:40.645 22:51:23 keyring_file -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1392621 00:27:40.645 22:51:23 keyring_file -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:27:40.645 22:51:23 keyring_file -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:27:40.645 22:51:23 keyring_file -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1392621' 00:27:40.645 killing process with pid 1392621 00:27:40.645 22:51:23 keyring_file -- common/autotest_common.sh@961 -- # kill 1392621 00:27:40.645 Received shutdown signal, test time was about 1.000000 seconds 00:27:40.645 00:27:40.645 Latency(us) 00:27:40.645 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:40.645 =================================================================================================================== 00:27:40.645 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:40.645 22:51:23 keyring_file -- common/autotest_common.sh@966 -- # wait 1392621 00:27:40.905 22:51:24 keyring_file -- keyring/file.sh@21 -- # killprocess 1391152 00:27:40.905 22:51:24 keyring_file -- common/autotest_common.sh@942 -- # '[' -z 1391152 ']' 00:27:40.905 22:51:24 keyring_file -- common/autotest_common.sh@946 -- # kill -0 1391152 00:27:40.905 22:51:24 keyring_file -- common/autotest_common.sh@947 -- # uname 00:27:40.905 22:51:24 keyring_file -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:27:40.905 22:51:24 keyring_file -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1391152 00:27:40.905 22:51:24 keyring_file -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:27:40.905 22:51:24 keyring_file -- 
common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:27:40.905 22:51:24 keyring_file -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1391152' 00:27:40.905 killing process with pid 1391152 00:27:40.905 22:51:24 keyring_file -- common/autotest_common.sh@961 -- # kill 1391152 00:27:40.905 [2024-07-15 22:51:24.187026] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:40.905 22:51:24 keyring_file -- common/autotest_common.sh@966 -- # wait 1391152 00:27:41.161 00:27:41.161 real 0m14.245s 00:27:41.161 user 0m34.939s 00:27:41.161 sys 0m3.282s 00:27:41.161 22:51:24 keyring_file -- common/autotest_common.sh@1118 -- # xtrace_disable 00:27:41.161 22:51:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:41.161 ************************************ 00:27:41.161 END TEST keyring_file 00:27:41.161 ************************************ 00:27:41.419 22:51:24 -- common/autotest_common.sh@1136 -- # return 0 00:27:41.419 22:51:24 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:27:41.419 22:51:24 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:41.419 22:51:24 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:27:41.419 22:51:24 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:27:41.419 22:51:24 -- common/autotest_common.sh@10 -- # set +x 00:27:41.419 ************************************ 00:27:41.419 START TEST keyring_linux 00:27:41.419 ************************************ 00:27:41.419 22:51:24 keyring_linux -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:41.419 * Looking for test storage... 
00:27:41.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:41.419 22:51:24 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:41.419 22:51:24 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:41.419 22:51:24 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:41.419 22:51:24 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:41.419 22:51:24 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:41.419 22:51:24 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:41.419 22:51:24 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.419 22:51:24 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.419 22:51:24 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.419 22:51:24 keyring_linux -- paths/export.sh@5 -- # export PATH 00:27:41.419 22:51:24 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:41.419 22:51:24 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:41.419 22:51:24 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:41.419 22:51:24 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:41.419 22:51:24 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:27:41.419 22:51:24 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:27:41.419 22:51:24 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:27:41.419 22:51:24 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:27:41.419 22:51:24 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:41.419 22:51:24 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:27:41.419 22:51:24 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:41.419 22:51:24 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:41.419 22:51:24 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:27:41.419 22:51:24 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:41.419 22:51:24 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:27:41.419 22:51:24 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:27:41.419 /tmp/:spdk-test:key0 00:27:41.419 22:51:24 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:27:41.419 22:51:24 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:41.419 22:51:24 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:27:41.419 22:51:24 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:41.419 22:51:24 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:41.419 22:51:24 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:27:41.419 22:51:24 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:41.419 22:51:24 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:41.419 22:51:24 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:27:41.419 22:51:24 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:27:41.419 /tmp/:spdk-test:key1 00:27:41.419 22:51:24 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1392981 00:27:41.419 22:51:24 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:41.419 22:51:24 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1392981 00:27:41.419 22:51:24 keyring_linux -- common/autotest_common.sh@823 -- # '[' -z 1392981 ']' 00:27:41.419 22:51:24 keyring_linux -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:41.419 22:51:24 keyring_linux -- common/autotest_common.sh@828 -- # local max_retries=100 00:27:41.419 22:51:24 keyring_linux -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:41.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:41.419 22:51:24 keyring_linux -- common/autotest_common.sh@832 -- # xtrace_disable 00:27:41.419 22:51:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:41.419 [2024-07-15 22:51:24.870407] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:27:41.419 [2024-07-15 22:51:24.870506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1392981 ] 00:27:41.677 [2024-07-15 22:51:24.931099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.677 [2024-07-15 22:51:25.046093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.618 22:51:25 keyring_linux -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:27:42.618 22:51:25 keyring_linux -- common/autotest_common.sh@856 -- # return 0 00:27:42.618 22:51:25 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:27:42.618 22:51:25 keyring_linux -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:42.618 22:51:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:42.618 [2024-07-15 22:51:25.836759] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:42.618 null0 00:27:42.618 [2024-07-15 22:51:25.868812] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:42.618 [2024-07-15 22:51:25.869308] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:42.618 22:51:25 keyring_linux -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:42.618 22:51:25 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:27:42.618 224641029 00:27:42.618 22:51:25 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:27:42.618 383529566 00:27:42.618 22:51:25 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1393122 00:27:42.618 22:51:25 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1393122 /var/tmp/bperf.sock 00:27:42.618 22:51:25 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:27:42.618 22:51:25 keyring_linux -- common/autotest_common.sh@823 -- # '[' -z 1393122 ']' 00:27:42.618 22:51:25 keyring_linux -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:42.618 22:51:25 keyring_linux -- common/autotest_common.sh@828 -- # local max_retries=100 00:27:42.618 22:51:25 keyring_linux -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:42.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:42.618 22:51:25 keyring_linux -- common/autotest_common.sh@832 -- # xtrace_disable 00:27:42.618 22:51:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:42.618 [2024-07-15 22:51:25.934364] Starting SPDK v24.09-pre git sha1 958a93494 / DPDK 24.03.0 initialization... 
00:27:42.618 [2024-07-15 22:51:25.934449] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1393122 ] 00:27:42.618 [2024-07-15 22:51:25.995589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.618 [2024-07-15 22:51:26.112389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:42.877 22:51:26 keyring_linux -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:27:42.877 22:51:26 keyring_linux -- common/autotest_common.sh@856 -- # return 0 00:27:42.877 22:51:26 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:27:42.877 22:51:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:27:43.135 22:51:26 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:27:43.135 22:51:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:43.393 22:51:26 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:43.393 22:51:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:43.650 [2024-07-15 22:51:26.947769] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:43.650 nvme0n1 00:27:43.650 22:51:27 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:27:43.650 22:51:27 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:27:43.650 22:51:27 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:43.650 22:51:27 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:43.650 22:51:27 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:43.650 22:51:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:43.907 22:51:27 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:27:43.907 22:51:27 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:43.907 22:51:27 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:27:43.907 22:51:27 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:27:43.907 22:51:27 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:43.907 22:51:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:43.907 22:51:27 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:27:44.164 22:51:27 keyring_linux -- keyring/linux.sh@25 -- # sn=224641029 00:27:44.164 22:51:27 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:27:44.164 22:51:27 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:27:44.164 22:51:27 keyring_linux -- keyring/linux.sh@26 -- # [[ 224641029 == \2\2\4\6\4\1\0\2\9 ]] 00:27:44.164 22:51:27 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 224641029 00:27:44.164 22:51:27 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:27:44.164 22:51:27 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:44.164 Running I/O for 1 seconds... 00:27:45.533 00:27:45.533 Latency(us) 00:27:45.533 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:45.533 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:45.533 nvme0n1 : 1.02 3856.66 15.07 0.00 0.00 32828.45 7330.32 45826.65 00:27:45.533 =================================================================================================================== 00:27:45.533 Total : 3856.66 15.07 0.00 0.00 32828.45 7330.32 45826.65 00:27:45.533 0 00:27:45.533 22:51:28 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:45.533 22:51:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:45.533 22:51:28 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:27:45.533 22:51:28 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:27:45.533 22:51:28 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:45.533 22:51:28 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:45.533 22:51:28 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:45.533 22:51:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:45.790 22:51:29 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:27:45.790 22:51:29 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:45.790 22:51:29 keyring_linux -- keyring/linux.sh@23 -- # return 00:27:45.790 22:51:29 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:45.790 22:51:29 keyring_linux -- common/autotest_common.sh@642 -- # local es=0 00:27:45.790 22:51:29 keyring_linux -- common/autotest_common.sh@644 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:45.790 22:51:29 keyring_linux -- common/autotest_common.sh@630 -- # local arg=bperf_cmd 00:27:45.790 22:51:29 keyring_linux -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:27:45.790 22:51:29 keyring_linux -- common/autotest_common.sh@634 -- # type -t bperf_cmd 00:27:45.790 22:51:29 keyring_linux -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:27:45.790 22:51:29 keyring_linux -- common/autotest_common.sh@645 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:45.790 22:51:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:46.048 [2024-07-15 22:51:29.409892] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:46.048 [2024-07-15 22:51:29.410264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17883f0 (107): Transport endpoint is not connected 00:27:46.048 [2024-07-15 22:51:29.411255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17883f0 (9): Bad file descriptor 00:27:46.048 [2024-07-15 22:51:29.412254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:46.048 [2024-07-15 22:51:29.412277] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:46.048 [2024-07-15 22:51:29.412293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:46.048 request: 00:27:46.048 { 00:27:46.048 "name": "nvme0", 00:27:46.048 "trtype": "tcp", 00:27:46.048 "traddr": "127.0.0.1", 00:27:46.048 "adrfam": "ipv4", 00:27:46.048 "trsvcid": "4420", 00:27:46.048 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:46.048 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:46.048 "prchk_reftag": false, 00:27:46.048 "prchk_guard": false, 00:27:46.048 "hdgst": false, 00:27:46.048 "ddgst": false, 00:27:46.048 "psk": ":spdk-test:key1", 00:27:46.048 "method": "bdev_nvme_attach_controller", 00:27:46.048 "req_id": 1 00:27:46.048 } 00:27:46.048 Got JSON-RPC error response 00:27:46.048 response: 00:27:46.048 { 00:27:46.048 "code": -5, 00:27:46.048 "message": "Input/output error" 00:27:46.048 } 00:27:46.048 22:51:29 keyring_linux -- common/autotest_common.sh@645 -- # es=1 00:27:46.048 22:51:29 keyring_linux -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:27:46.048 22:51:29 keyring_linux -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:27:46.048 22:51:29 keyring_linux -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:27:46.048 22:51:29 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:27:46.048 22:51:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:27:46.048 22:51:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:27:46.048 22:51:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:27:46.048 22:51:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:27:46.048 22:51:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:27:46.048 22:51:29 keyring_linux -- keyring/linux.sh@33 -- # sn=224641029 00:27:46.048 22:51:29 
keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 224641029 00:27:46.048 1 links removed 00:27:46.048 22:51:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:27:46.048 22:51:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:27:46.048 22:51:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:27:46.048 22:51:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:27:46.048 22:51:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:27:46.048 22:51:29 keyring_linux -- keyring/linux.sh@33 -- # sn=383529566 00:27:46.048 22:51:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 383529566 00:27:46.048 1 links removed 00:27:46.048 22:51:29 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1393122 00:27:46.048 22:51:29 keyring_linux -- common/autotest_common.sh@942 -- # '[' -z 1393122 ']' 00:27:46.048 22:51:29 keyring_linux -- common/autotest_common.sh@946 -- # kill -0 1393122 00:27:46.048 22:51:29 keyring_linux -- common/autotest_common.sh@947 -- # uname 00:27:46.048 22:51:29 keyring_linux -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:27:46.048 22:51:29 keyring_linux -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1393122 00:27:46.048 22:51:29 keyring_linux -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:27:46.048 22:51:29 keyring_linux -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:27:46.048 22:51:29 keyring_linux -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1393122' 00:27:46.048 killing process with pid 1393122 00:27:46.048 22:51:29 keyring_linux -- common/autotest_common.sh@961 -- # kill 1393122 00:27:46.048 Received shutdown signal, test time was about 1.000000 seconds 00:27:46.048 00:27:46.048 Latency(us) 00:27:46.048 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:46.048 
=================================================================================================================== 00:27:46.048 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:46.048 22:51:29 keyring_linux -- common/autotest_common.sh@966 -- # wait 1393122 00:27:46.304 22:51:29 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1392981 00:27:46.305 22:51:29 keyring_linux -- common/autotest_common.sh@942 -- # '[' -z 1392981 ']' 00:27:46.305 22:51:29 keyring_linux -- common/autotest_common.sh@946 -- # kill -0 1392981 00:27:46.305 22:51:29 keyring_linux -- common/autotest_common.sh@947 -- # uname 00:27:46.305 22:51:29 keyring_linux -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:27:46.305 22:51:29 keyring_linux -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1392981 00:27:46.305 22:51:29 keyring_linux -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:27:46.305 22:51:29 keyring_linux -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:27:46.305 22:51:29 keyring_linux -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1392981' 00:27:46.305 killing process with pid 1392981 00:27:46.305 22:51:29 keyring_linux -- common/autotest_common.sh@961 -- # kill 1392981 00:27:46.305 22:51:29 keyring_linux -- common/autotest_common.sh@966 -- # wait 1392981 00:27:46.869 00:27:46.869 real 0m5.521s 00:27:46.869 user 0m10.065s 00:27:46.869 sys 0m1.467s 00:27:46.869 22:51:30 keyring_linux -- common/autotest_common.sh@1118 -- # xtrace_disable 00:27:46.869 22:51:30 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:46.869 ************************************ 00:27:46.869 END TEST keyring_linux 00:27:46.869 ************************************ 00:27:46.869 22:51:30 -- common/autotest_common.sh@1136 -- # return 0 00:27:46.869 22:51:30 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:27:46.869 22:51:30 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:27:46.869 22:51:30 -- spdk/autotest.sh@316 -- # '[' 0 
-eq 1 ']' 00:27:46.869 22:51:30 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:27:46.869 22:51:30 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:27:46.869 22:51:30 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:27:46.869 22:51:30 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:27:46.869 22:51:30 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:27:46.869 22:51:30 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:27:46.869 22:51:30 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:27:46.869 22:51:30 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:27:46.869 22:51:30 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:27:46.869 22:51:30 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:27:46.869 22:51:30 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:27:46.869 22:51:30 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:27:46.869 22:51:30 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:27:46.869 22:51:30 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:27:46.869 22:51:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:46.869 22:51:30 -- common/autotest_common.sh@10 -- # set +x 00:27:46.869 22:51:30 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:27:46.869 22:51:30 -- common/autotest_common.sh@1386 -- # local autotest_es=0 00:27:46.869 22:51:30 -- common/autotest_common.sh@1387 -- # xtrace_disable 00:27:46.869 22:51:30 -- common/autotest_common.sh@10 -- # set +x 00:27:48.766 INFO: APP EXITING 00:27:48.766 INFO: killing all VMs 00:27:48.766 INFO: killing vhost app 00:27:48.766 INFO: EXIT DONE 00:27:49.699 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:27:49.699 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:27:49.699 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:27:49.699 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:27:49.699 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:27:49.699 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:27:49.699 0000:00:04.2 (8086 0e22): 
Already using the ioatdma driver 00:27:49.699 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:27:49.699 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:27:49.699 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:27:49.699 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:27:49.699 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:27:49.699 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:27:49.957 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:27:49.957 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:27:49.957 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:27:49.957 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:27:51.333 Cleaning 00:27:51.333 Removing: /var/run/dpdk/spdk0/config 00:27:51.333 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:51.333 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:51.333 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:51.333 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:51.333 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:27:51.333 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:27:51.333 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:27:51.333 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:27:51.333 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:51.333 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:51.333 Removing: /var/run/dpdk/spdk1/config 00:27:51.333 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:51.333 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:51.333 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:27:51.333 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:51.333 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:27:51.333 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:27:51.333 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:27:51.333 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:27:51.333 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:51.333 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:51.333 Removing: /var/run/dpdk/spdk1/mp_socket 00:27:51.333 Removing: /var/run/dpdk/spdk2/config 00:27:51.333 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:51.333 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:51.333 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:51.333 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:51.333 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:27:51.333 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:27:51.333 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:27:51.333 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:27:51.333 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:51.333 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:51.333 Removing: /var/run/dpdk/spdk3/config 00:27:51.333 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:51.333 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:51.333 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:51.333 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:51.334 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:27:51.334 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:27:51.334 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:27:51.334 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:27:51.334 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:51.334 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:51.334 Removing: /var/run/dpdk/spdk4/config 00:27:51.334 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:51.334 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:51.334 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:51.334 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:51.334 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:27:51.334 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:27:51.334 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:27:51.334 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:27:51.334 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:51.334 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:51.334 Removing: /dev/shm/bdev_svc_trace.1 00:27:51.334 Removing: /dev/shm/nvmf_trace.0 00:27:51.334 Removing: /dev/shm/spdk_tgt_trace.pid1131040 00:27:51.334 Removing: /var/run/dpdk/spdk0 00:27:51.334 Removing: /var/run/dpdk/spdk1 00:27:51.334 Removing: /var/run/dpdk/spdk2 00:27:51.334 Removing: /var/run/dpdk/spdk3 00:27:51.334 Removing: /var/run/dpdk/spdk4 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1128956 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1129798 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1131040 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1131556 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1132252 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1132394 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1133107 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1133239 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1133480 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1134675 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1135646 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1135912 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1136197 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1136420 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1136610 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1136771 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1136935 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1137115 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1137421 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1139780 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1139942 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1140236 00:27:51.334 Removing: 
/var/run/dpdk/spdk_pid1140368 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1140704 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1140808 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1141116 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1141245 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1141414 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1141552 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1141722 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1141860 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1142225 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1142498 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1142696 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1142863 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1142898 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1143078 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1143242 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1143473 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1143672 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1143831 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1144022 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1144256 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1144423 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1144582 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1144848 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1145014 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1145204 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1145439 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1145606 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1145854 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1146031 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1146194 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1146470 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1146628 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1146793 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1147070 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1147142 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1147346 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1149523 
00:27:51.334 Removing: /var/run/dpdk/spdk_pid1176726 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1179348 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1186450 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1189760 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1192119 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1192636 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1196489 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1201082 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1201084 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1201741 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1202345 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1202941 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1203342 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1203346 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1203606 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1203720 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1203742 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1204297 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1204939 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1205599 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1205996 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1206007 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1206220 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1207056 00:27:51.334 Removing: /var/run/dpdk/spdk_pid1207870 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1213238 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1213514 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1216017 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1219857 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1221910 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1228401 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1234112 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1235307 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1235970 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1246171 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1248332 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1273845 00:27:51.593 Removing: 
/var/run/dpdk/spdk_pid1276755 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1277938 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1279249 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1279364 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1279412 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1279548 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1279983 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1281300 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1281933 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1282337 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1283951 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1284486 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1285048 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1288030 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1294111 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1296762 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1300661 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1301733 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1302823 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1305501 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1307870 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1312198 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1312206 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1315098 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1315238 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1315375 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1315644 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1315737 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1318532 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1318865 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1321527 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1324135 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1327563 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1330998 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1337297 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1341689 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1341693 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1353769 
00:27:51.593 Removing: /var/run/dpdk/spdk_pid1354305 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1354718 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1355215 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1355834 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1356313 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1356889 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1357293 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1360298 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1360441 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1364249 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1364418 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1366030 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1370992 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1371072 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1373967 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1375369 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1376765 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1377505 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1378931 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1379803 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1385132 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1385465 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1385856 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1387420 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1387812 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1388175 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1391152 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1391157 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1392621 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1392981 00:27:51.593 Removing: /var/run/dpdk/spdk_pid1393122 00:27:51.593 Clean 00:27:51.593 22:51:35 -- common/autotest_common.sh@1445 -- # return 0 00:27:51.593 22:51:35 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:27:51.593 22:51:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:51.593 22:51:35 -- common/autotest_common.sh@10 -- # set +x 00:27:51.850 22:51:35 -- spdk/autotest.sh@386 -- # 
timing_exit autotest 00:27:51.850 22:51:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:51.850 22:51:35 -- common/autotest_common.sh@10 -- # set +x 00:27:51.850 22:51:35 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:27:51.850 22:51:35 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:27:51.850 22:51:35 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:27:51.850 22:51:35 -- spdk/autotest.sh@391 -- # hash lcov 00:27:51.850 22:51:35 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:27:51.850 22:51:35 -- spdk/autotest.sh@393 -- # hostname 00:27:51.850 22:51:35 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:27:51.850 geninfo: WARNING: invalid characters removed from testname! 
00:28:23.979 22:52:03 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:23.979 22:52:07 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:27.268 22:52:10 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:30.543 22:52:13 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:33.068 22:52:16 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:28:36.351 22:52:19 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:28:38.882 22:52:22 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:28:38.882 22:52:22 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:28:38.882 22:52:22 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:28:38.882 22:52:22 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:38.882 22:52:22 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:38.882 22:52:22 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:38.882 22:52:22 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:38.882 22:52:22 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:38.882 22:52:22 -- paths/export.sh@5 -- $ export PATH
00:28:38.882 22:52:22 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:38.882 22:52:22 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:28:38.882 22:52:22 -- common/autobuild_common.sh@444 -- $ date +%s
00:28:38.882 22:52:22 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721076742.XXXXXX
00:28:38.882 22:52:22 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721076742.uiDwAp
00:28:38.882 22:52:22 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:28:38.882 22:52:22 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:28:38.882 22:52:22 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:28:38.882 22:52:22 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:28:38.882 22:52:22 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:28:38.882 22:52:22 -- common/autobuild_common.sh@460 -- $ get_config_params
00:28:38.882 22:52:22 -- common/autotest_common.sh@390 -- $ xtrace_disable
00:28:38.882 22:52:22 -- common/autotest_common.sh@10 -- $ set +x
00:28:38.882 22:52:22 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:28:38.882 22:52:22 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:28:38.882 22:52:22 -- pm/common@17 -- $ local monitor
00:28:38.882 22:52:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:38.882 22:52:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:38.882 22:52:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:38.882 22:52:22 -- pm/common@21 -- $ date +%s
00:28:38.882 22:52:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:38.882 22:52:22 -- pm/common@21 -- $ date +%s
00:28:38.882 22:52:22 -- pm/common@25 -- $ sleep 1
00:28:38.882 22:52:22 -- pm/common@21 -- $ date +%s
00:28:38.882 22:52:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721076742
00:28:38.882 22:52:22 -- pm/common@21 -- $ date +%s
00:28:38.883 22:52:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721076742
00:28:38.883 22:52:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721076742
00:28:38.883 22:52:22 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721076742
00:28:38.883 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721076742_collect-vmstat.pm.log
00:28:38.883 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721076742_collect-cpu-load.pm.log
00:28:38.883 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721076742_collect-cpu-temp.pm.log
00:28:38.883 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721076742_collect-bmc-pm.bmc.pm.log
00:28:39.819 22:52:23 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:28:39.819 22:52:23 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:28:39.819 22:52:23 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:39.819 22:52:23 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:28:39.819 22:52:23 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:28:39.819 22:52:23 -- spdk/autopackage.sh@19 -- $ timing_finish
00:28:39.819 22:52:23 -- common/autotest_common.sh@728 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:28:39.819 22:52:23 -- common/autotest_common.sh@729 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:28:39.819 22:52:23 -- common/autotest_common.sh@731 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:28:39.819 22:52:23 -- spdk/autopackage.sh@20 -- $ exit 0
00:28:39.819 22:52:23 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:28:39.819 22:52:23 -- pm/common@29 -- $ signal_monitor_resources TERM
00:28:39.819 22:52:23 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:28:39.819 22:52:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:39.819 22:52:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:28:39.819 22:52:23 -- pm/common@44 -- $ pid=1402829
00:28:39.819 22:52:23 -- pm/common@50 -- $ kill -TERM 1402829
00:28:39.819 22:52:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:39.819 22:52:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:28:39.819 22:52:23 -- pm/common@44 -- $ pid=1402831
00:28:39.819 22:52:23 -- pm/common@50 -- $ kill -TERM 1402831
00:28:39.819 22:52:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:39.819 22:52:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:28:39.819 22:52:23 -- pm/common@44 -- $ pid=1402833
00:28:39.819 22:52:23 -- pm/common@50 -- $ kill -TERM 1402833
00:28:39.819 22:52:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:39.819 22:52:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:28:39.819 22:52:23 -- pm/common@44 -- $ pid=1402865
00:28:39.819 22:52:23 -- pm/common@50 -- $ sudo -E kill -TERM 1402865
00:28:39.819 + [[ -n 1046954 ]]
00:28:39.819 + sudo kill 1046954
00:28:39.831 [Pipeline] }
00:28:39.855 [Pipeline] // stage
00:28:39.861 [Pipeline] }
00:28:39.883 [Pipeline] // timeout
00:28:39.889 [Pipeline] }
00:28:39.909 [Pipeline] // catchError
00:28:39.942 [Pipeline] }
00:28:39.988 [Pipeline] // wrap
00:28:39.992 [Pipeline] }
00:28:40.003 [Pipeline] // catchError
00:28:40.010 [Pipeline] stage
00:28:40.012 [Pipeline] { (Epilogue)
00:28:40.020 [Pipeline] catchError
00:28:40.021 [Pipeline] {
00:28:40.030 [Pipeline] echo
00:28:40.031 Cleanup processes
00:28:40.035 [Pipeline] sh
00:28:40.331 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:40.331 1402991 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:28:40.331 1403093 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:40.348 [Pipeline] sh
00:28:40.632 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:40.633 ++ grep -v 'sudo pgrep'
00:28:40.633 ++ awk '{print $1}'
00:28:40.633 + sudo kill -9 1402991
00:28:40.644 [Pipeline] sh
00:28:40.925 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:28:49.054 [Pipeline] sh
00:28:49.339 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:28:49.339 Artifacts sizes are good
00:28:49.353 [Pipeline] archiveArtifacts
00:28:49.360 Archiving artifacts
00:28:49.593 [Pipeline] sh
00:28:49.881 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:28:49.915 [Pipeline] cleanWs
00:28:49.923 [WS-CLEANUP] Deleting project workspace...
00:28:49.923 [WS-CLEANUP] Deferred wipeout is used...
00:28:49.929 [WS-CLEANUP] done
00:28:49.930 [Pipeline] }
00:28:49.944 [Pipeline] // catchError
00:28:49.953 [Pipeline] sh
00:28:50.229 + logger -p user.info -t JENKINS-CI
00:28:50.237 [Pipeline] }
00:28:50.253 [Pipeline] // stage
00:28:50.257 [Pipeline] }
00:28:50.272 [Pipeline] // node
00:28:50.277 [Pipeline] End of Pipeline
00:28:50.378 Finished: SUCCESS